Test Report: Hyper-V_Windows 19790

                    
b9d2e2c9658f87d0032c63e9ff5f9056e8d14f14:2024-10-14:36644

Failed tests (18/203)

TestErrorSpam/setup (192.1s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-274900 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-274900 --driver=hyperv
error_spam_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -p nospam-274900 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-274900 --driver=hyperv: (3m12.0960665s)
error_spam_test.go:96: unexpected stderr: "! Failing to connect to https://registry.k8s.io/ from inside the minikube VM"
error_spam_test.go:96: unexpected stderr: "* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/"
error_spam_test.go:110: minikube stdout:
* [nospam-274900] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5011 Build 19045.5011
- KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
- MINIKUBE_LOCATION=19790
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
* Using the hyperv driver based on user configuration
* Starting "nospam-274900" primary control-plane node in "nospam-274900" cluster
* Creating hyperv VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
* Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "nospam-274900" cluster and "default" namespace by default
error_spam_test.go:111: minikube stderr:
! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
--- FAIL: TestErrorSpam/setup (192.10s)
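The start itself succeeded (3m12s); the test fails because the two stderr lines above are not expected output, and they indicate the VM could not reach registry.k8s.io. As the second warning suggests, a host behind a restricted network would pass a proxy to minikube through environment variables. Below is a minimal sketch in Go of that workaround, assuming the HTTP_PROXY/HTTPS_PROXY/NO_PROXY variables described on the linked proxy docs page; the proxy address is a placeholder for illustration, not a value taken from this report:

    // Minimal sketch: run `minikube start` with proxy environment variables set,
    // per https://minikube.sigs.k8s.io/docs/reference/networking/proxy/.
    // proxy.example.com:3128 is a placeholder, not a value from this report.
    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-windows-amd64.exe", "start",
            "-p", "nospam-274900", "--driver=hyperv")
        cmd.Env = append(os.Environ(),
            "HTTP_PROXY=http://proxy.example.com:3128",  // placeholder proxy
            "HTTPS_PROXY=http://proxy.example.com:3128", // placeholder proxy
            // keep traffic to the VM's subnet (172.20.x.x in this run) off the proxy
            "NO_PROXY=localhost,127.0.0.1,172.20.0.0/16",
        )
        cmd.Stdout = os.Stdout
        cmd.Stderr = os.Stderr
        if err := cmd.Run(); err != nil {
            log.Fatal(err)
        }
    }

Exporting the same variables in the shell before invoking out/minikube-windows-amd64.exe start would have the same effect.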

TestFunctional/serial/MinikubeKubectlCmdDirectly (32.96s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:735: link out/minikube-windows-amd64.exe out\kubectl.exe: Cannot create a file when that file already exists.
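This failure is a Windows hard-link collision: functional_test.go:735 links the minikube binary to out\kubectl.exe, and a hard link cannot overwrite an existing file on Windows, so a leftover out\kubectl.exe from an earlier run yields "Cannot create a file when that file already exists." Below is a minimal sketch of the remove-before-link pattern that sidesteps this; linkReplacing is a hypothetical helper for illustration, not the test's actual code:

    // Minimal sketch: hard-link src to dst, removing any stale dst first, because
    // os.Link refuses to overwrite an existing file (ERROR_ALREADY_EXISTS on Windows).
    package main

    import (
        "fmt"
        "os"
    )

    // linkReplacing is a hypothetical helper, not minikube's code.
    func linkReplacing(src, dst string) error {
        if _, err := os.Lstat(dst); err == nil {
            if err := os.Remove(dst); err != nil {
                return fmt.Errorf("removing stale %s: %w", dst, err)
            }
        }
        return os.Link(src, dst)
    }

    func main() {
        if err := linkReplacing(`out/minikube-windows-amd64.exe`, `out\kubectl.exe`); err != nil {
            fmt.Println("link:", err)
        }
    }

Deleting the stale out\kubectl.exe between runs would avoid the error in the same way. The post-mortem logs collected after the failure follow.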
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-572000 -n functional-572000
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-572000 -n functional-572000: (11.6701872s)
helpers_test.go:244: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-572000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-572000 logs -n 25: (8.3790512s)
helpers_test.go:252: TestFunctional/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                            Args                             |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| pause   | nospam-274900 --log_dir                                     | nospam-274900     | minikube1\jenkins | v1.34.0 | 14 Oct 24 06:58 PDT | 14 Oct 24 06:58 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-274900 |                   |                   |         |                     |                     |
	|         | pause                                                       |                   |                   |         |                     |                     |
	| unpause | nospam-274900 --log_dir                                     | nospam-274900     | minikube1\jenkins | v1.34.0 | 14 Oct 24 06:58 PDT | 14 Oct 24 06:59 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-274900 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-274900 --log_dir                                     | nospam-274900     | minikube1\jenkins | v1.34.0 | 14 Oct 24 06:59 PDT | 14 Oct 24 06:59 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-274900 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-274900 --log_dir                                     | nospam-274900     | minikube1\jenkins | v1.34.0 | 14 Oct 24 06:59 PDT | 14 Oct 24 06:59 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-274900 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| stop    | nospam-274900 --log_dir                                     | nospam-274900     | minikube1\jenkins | v1.34.0 | 14 Oct 24 06:59 PDT | 14 Oct 24 06:59 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-274900 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-274900 --log_dir                                     | nospam-274900     | minikube1\jenkins | v1.34.0 | 14 Oct 24 06:59 PDT | 14 Oct 24 07:00 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-274900 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-274900 --log_dir                                     | nospam-274900     | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:00 PDT | 14 Oct 24 07:00 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-274900 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| delete  | -p nospam-274900                                            | nospam-274900     | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:00 PDT | 14 Oct 24 07:00 PDT |
	| start   | -p functional-572000                                        | functional-572000 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:00 PDT | 14 Oct 24 07:03 PDT |
	|         | --memory=4000                                               |                   |                   |         |                     |                     |
	|         | --apiserver-port=8441                                       |                   |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv                                  |                   |                   |         |                     |                     |
	| start   | -p functional-572000                                        | functional-572000 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:03 PDT | 14 Oct 24 07:05 PDT |
	|         | --alsologtostderr -v=8                                      |                   |                   |         |                     |                     |
	| cache   | functional-572000 cache add                                 | functional-572000 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:05 PDT | 14 Oct 24 07:05 PDT |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | functional-572000 cache add                                 | functional-572000 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:05 PDT | 14 Oct 24 07:06 PDT |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | functional-572000 cache add                                 | functional-572000 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:06 PDT | 14 Oct 24 07:06 PDT |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-572000 cache add                                 | functional-572000 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:06 PDT | 14 Oct 24 07:06 PDT |
	|         | minikube-local-cache-test:functional-572000                 |                   |                   |         |                     |                     |
	| cache   | functional-572000 cache delete                              | functional-572000 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:06 PDT | 14 Oct 24 07:06 PDT |
	|         | minikube-local-cache-test:functional-572000                 |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:06 PDT | 14 Oct 24 07:06 PDT |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | list                                                        | minikube          | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:06 PDT | 14 Oct 24 07:06 PDT |
	| ssh     | functional-572000 ssh sudo                                  | functional-572000 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:06 PDT | 14 Oct 24 07:06 PDT |
	|         | crictl images                                               |                   |                   |         |                     |                     |
	| ssh     | functional-572000                                           | functional-572000 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:06 PDT | 14 Oct 24 07:06 PDT |
	|         | ssh sudo docker rmi                                         |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| ssh     | functional-572000 ssh                                       | functional-572000 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:06 PDT |                     |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-572000 cache reload                              | functional-572000 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:06 PDT | 14 Oct 24 07:07 PDT |
	| ssh     | functional-572000 ssh                                       | functional-572000 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:07 PDT | 14 Oct 24 07:07 PDT |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:07 PDT | 14 Oct 24 07:07 PDT |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:07 PDT | 14 Oct 24 07:07 PDT |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| kubectl | functional-572000 kubectl --                                | functional-572000 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:07 PDT | 14 Oct 24 07:07 PDT |
	|         | --context functional-572000                                 |                   |                   |         |                     |                     |
	|         | get pods                                                    |                   |                   |         |                     |                     |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/14 07:03:44
	Running on machine: minikube1
	Binary: Built with gc go1.23.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 07:03:44.859760   11452 out.go:345] Setting OutFile to fd 1152 ...
	I1014 07:03:44.861876   11452 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:03:44.861876   11452 out.go:358] Setting ErrFile to fd 1356...
	I1014 07:03:44.861876   11452 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:03:44.891690   11452 out.go:352] Setting JSON to false
	I1014 07:03:44.897141   11452 start.go:129] hostinfo: {"hostname":"minikube1","uptime":100139,"bootTime":1728814485,"procs":206,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5011 Build 19045.5011","kernelVersion":"10.0.19045.5011 Build 19045.5011","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W1014 07:03:44.897415   11452 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1014 07:03:44.901851   11452 out.go:177] * [functional-572000] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5011 Build 19045.5011
	I1014 07:03:44.904341   11452 notify.go:220] Checking for updates...
	I1014 07:03:44.906387   11452 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1014 07:03:44.909446   11452 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 07:03:44.911948   11452 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I1014 07:03:44.914858   11452 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 07:03:44.917675   11452 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 07:03:44.920920   11452 config.go:182] Loaded profile config "functional-572000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:03:44.921711   11452 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 07:03:50.280094   11452 out.go:177] * Using the hyperv driver based on existing profile
	I1014 07:03:50.284978   11452 start.go:297] selected driver: hyperv
	I1014 07:03:50.285915   11452 start.go:901] validating driver "hyperv" against &{Name:functional-572000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.31.1 ClusterName:functional-572000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.99.72 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVe
rsion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 07:03:50.285915   11452 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 07:03:50.334960   11452 cni.go:84] Creating CNI manager for ""
	I1014 07:03:50.334960   11452 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1014 07:03:50.334960   11452 start.go:340] cluster config:
	{Name:functional-572000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-572000 Namespace:default APIServ
erHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.99.72 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 07:03:50.335624   11452 iso.go:125] acquiring lock: {Name:mkb71635bcbccba29dce9048ad4cc71430a7e577 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 07:03:50.340090   11452 out.go:177] * Starting "functional-572000" primary control-plane node in "functional-572000" cluster
	I1014 07:03:50.342613   11452 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1014 07:03:50.342613   11452 preload.go:146] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I1014 07:03:50.342613   11452 cache.go:56] Caching tarball of preloaded images
	I1014 07:03:50.343364   11452 preload.go:172] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1014 07:03:50.343364   11452 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1014 07:03:50.343364   11452 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-572000\config.json ...
	I1014 07:03:50.345220   11452 start.go:360] acquireMachinesLock for functional-572000: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 07:03:50.345220   11452 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-572000"
	I1014 07:03:50.346170   11452 start.go:96] Skipping create...Using existing machine configuration
	I1014 07:03:50.346170   11452 fix.go:54] fixHost starting: 
	I1014 07:03:50.346170   11452 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-572000 ).state
	I1014 07:03:52.993911   11452 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:03:52.993911   11452 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:03:52.994010   11452 fix.go:112] recreateIfNeeded on functional-572000: state=Running err=<nil>
	W1014 07:03:52.994086   11452 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 07:03:52.998543   11452 out.go:177] * Updating the running hyperv "functional-572000" VM ...
	I1014 07:03:53.000934   11452 machine.go:93] provisionDockerMachine start ...
	I1014 07:03:53.000934   11452 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-572000 ).state
	I1014 07:03:55.122618   11452 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:03:55.122618   11452 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:03:55.123329   11452 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-572000 ).networkadapters[0]).ipaddresses[0]
	I1014 07:03:57.622793   11452 main.go:141] libmachine: [stdout =====>] : 172.20.99.72
	
	I1014 07:03:57.622793   11452 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:03:57.626787   11452 main.go:141] libmachine: Using SSH client type: native
	I1014 07:03:57.627787   11452 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.99.72 22 <nil> <nil>}
	I1014 07:03:57.627787   11452 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 07:03:57.764571   11452 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-572000
	
	I1014 07:03:57.764653   11452 buildroot.go:166] provisioning hostname "functional-572000"
	I1014 07:03:57.764653   11452 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-572000 ).state
	I1014 07:03:59.867269   11452 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:03:59.867565   11452 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:03:59.867565   11452 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-572000 ).networkadapters[0]).ipaddresses[0]
	I1014 07:04:02.379227   11452 main.go:141] libmachine: [stdout =====>] : 172.20.99.72
	
	I1014 07:04:02.379725   11452 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:04:02.385597   11452 main.go:141] libmachine: Using SSH client type: native
	I1014 07:04:02.385657   11452 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.99.72 22 <nil> <nil>}
	I1014 07:04:02.385657   11452 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-572000 && echo "functional-572000" | sudo tee /etc/hostname
	I1014 07:04:02.541547   11452 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-572000
	
	I1014 07:04:02.541697   11452 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-572000 ).state
	I1014 07:04:04.609387   11452 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:04:04.609583   11452 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:04:04.609583   11452 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-572000 ).networkadapters[0]).ipaddresses[0]
	I1014 07:04:07.097326   11452 main.go:141] libmachine: [stdout =====>] : 172.20.99.72
	
	I1014 07:04:07.097498   11452 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:04:07.102666   11452 main.go:141] libmachine: Using SSH client type: native
	I1014 07:04:07.102666   11452 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.99.72 22 <nil> <nil>}
	I1014 07:04:07.102666   11452 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-572000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-572000/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-572000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 07:04:07.240679   11452 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 07:04:07.240679   11452 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I1014 07:04:07.240679   11452 buildroot.go:174] setting up certificates
	I1014 07:04:07.240679   11452 provision.go:84] configureAuth start
	I1014 07:04:07.240679   11452 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-572000 ).state
	I1014 07:04:09.354127   11452 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:04:09.354367   11452 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:04:09.354524   11452 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-572000 ).networkadapters[0]).ipaddresses[0]
	I1014 07:04:11.834380   11452 main.go:141] libmachine: [stdout =====>] : 172.20.99.72
	
	I1014 07:04:11.834659   11452 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:04:11.834659   11452 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-572000 ).state
	I1014 07:04:13.904783   11452 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:04:13.904783   11452 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:04:13.904949   11452 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-572000 ).networkadapters[0]).ipaddresses[0]
	I1014 07:04:16.451695   11452 main.go:141] libmachine: [stdout =====>] : 172.20.99.72
	
	I1014 07:04:16.452031   11452 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:04:16.452031   11452 provision.go:143] copyHostCerts
	I1014 07:04:16.452239   11452 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I1014 07:04:16.452598   11452 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I1014 07:04:16.452598   11452 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I1014 07:04:16.453082   11452 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1014 07:04:16.454167   11452 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I1014 07:04:16.454373   11452 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I1014 07:04:16.454489   11452 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I1014 07:04:16.454779   11452 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1014 07:04:16.455747   11452 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I1014 07:04:16.456009   11452 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I1014 07:04:16.456143   11452 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I1014 07:04:16.456497   11452 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I1014 07:04:16.457619   11452 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-572000 san=[127.0.0.1 172.20.99.72 functional-572000 localhost minikube]
	I1014 07:04:16.711033   11452 provision.go:177] copyRemoteCerts
	I1014 07:04:16.723037   11452 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 07:04:16.723037   11452 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-572000 ).state
	I1014 07:04:18.830364   11452 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:04:18.830420   11452 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:04:18.830420   11452 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-572000 ).networkadapters[0]).ipaddresses[0]
	I1014 07:04:21.399304   11452 main.go:141] libmachine: [stdout =====>] : 172.20.99.72
	
	I1014 07:04:21.399390   11452 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:04:21.399667   11452 sshutil.go:53] new ssh client: &{IP:172.20.99.72 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-572000\id_rsa Username:docker}
	I1014 07:04:21.509976   11452 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7869338s)
	I1014 07:04:21.510095   11452 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1014 07:04:21.510480   11452 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1014 07:04:21.558766   11452 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1014 07:04:21.558880   11452 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1014 07:04:21.624583   11452 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1014 07:04:21.625165   11452 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 07:04:21.677204   11452 provision.go:87] duration metric: took 14.4363814s to configureAuth
	I1014 07:04:21.677268   11452 buildroot.go:189] setting minikube options for container-runtime
	I1014 07:04:21.678008   11452 config.go:182] Loaded profile config "functional-572000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:04:21.678148   11452 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-572000 ).state
	I1014 07:04:23.805542   11452 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:04:23.805542   11452 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:04:23.806054   11452 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-572000 ).networkadapters[0]).ipaddresses[0]
	I1014 07:04:26.296016   11452 main.go:141] libmachine: [stdout =====>] : 172.20.99.72
	
	I1014 07:04:26.296016   11452 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:04:26.301126   11452 main.go:141] libmachine: Using SSH client type: native
	I1014 07:04:26.301871   11452 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.99.72 22 <nil> <nil>}
	I1014 07:04:26.301871   11452 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1014 07:04:26.441074   11452 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1014 07:04:26.441207   11452 buildroot.go:70] root file system type: tmpfs
	I1014 07:04:26.441548   11452 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1014 07:04:26.441548   11452 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-572000 ).state
	I1014 07:04:28.519470   11452 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:04:28.519470   11452 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:04:28.519470   11452 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-572000 ).networkadapters[0]).ipaddresses[0]
	I1014 07:04:31.012463   11452 main.go:141] libmachine: [stdout =====>] : 172.20.99.72
	
	I1014 07:04:31.012463   11452 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:04:31.018216   11452 main.go:141] libmachine: Using SSH client type: native
	I1014 07:04:31.019086   11452 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.99.72 22 <nil> <nil>}
	I1014 07:04:31.019086   11452 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1014 07:04:31.172215   11452 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1014 07:04:31.172356   11452 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-572000 ).state
	I1014 07:04:33.272124   11452 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:04:33.272808   11452 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:04:33.272808   11452 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-572000 ).networkadapters[0]).ipaddresses[0]
	I1014 07:04:35.794389   11452 main.go:141] libmachine: [stdout =====>] : 172.20.99.72
	
	I1014 07:04:35.794389   11452 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:04:35.800423   11452 main.go:141] libmachine: Using SSH client type: native
	I1014 07:04:35.801035   11452 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.99.72 22 <nil> <nil>}
	I1014 07:04:35.801035   11452 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1014 07:04:35.965306   11452 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 07:04:35.965306   11452 machine.go:96] duration metric: took 42.9643248s to provisionDockerMachine
	I1014 07:04:35.965306   11452 start.go:293] postStartSetup for "functional-572000" (driver="hyperv")
	I1014 07:04:35.965306   11452 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 07:04:35.977430   11452 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 07:04:35.977993   11452 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-572000 ).state
	I1014 07:04:38.087737   11452 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:04:38.087737   11452 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:04:38.087827   11452 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-572000 ).networkadapters[0]).ipaddresses[0]
	I1014 07:04:40.647675   11452 main.go:141] libmachine: [stdout =====>] : 172.20.99.72
	
	I1014 07:04:40.647675   11452 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:04:40.648708   11452 sshutil.go:53] new ssh client: &{IP:172.20.99.72 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-572000\id_rsa Username:docker}
	I1014 07:04:40.764818   11452 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7873832s)
	I1014 07:04:40.777571   11452 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 07:04:40.784944   11452 command_runner.go:130] > NAME=Buildroot
	I1014 07:04:40.784944   11452 command_runner.go:130] > VERSION=2023.02.9-dirty
	I1014 07:04:40.784944   11452 command_runner.go:130] > ID=buildroot
	I1014 07:04:40.784944   11452 command_runner.go:130] > VERSION_ID=2023.02.9
	I1014 07:04:40.784944   11452 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I1014 07:04:40.784944   11452 info.go:137] Remote host: Buildroot 2023.02.9
	I1014 07:04:40.784944   11452 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I1014 07:04:40.785670   11452 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I1014 07:04:40.786418   11452 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem -> 9362.pem in /etc/ssl/certs
	I1014 07:04:40.786418   11452 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem -> /etc/ssl/certs/9362.pem
	I1014 07:04:40.787829   11452 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\test\nested\copy\936\hosts -> hosts in /etc/test/nested/copy/936
	I1014 07:04:40.787829   11452 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\test\nested\copy\936\hosts -> /etc/test/nested/copy/936/hosts
	I1014 07:04:40.801355   11452 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/936
	I1014 07:04:40.832716   11452 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem --> /etc/ssl/certs/9362.pem (1708 bytes)
	I1014 07:04:40.880421   11452 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\test\nested\copy\936\hosts --> /etc/test/nested/copy/936/hosts (40 bytes)
	I1014 07:04:40.930400   11452 start.go:296] duration metric: took 4.9650878s for postStartSetup
	I1014 07:04:40.930400   11452 fix.go:56] duration metric: took 50.5841733s for fixHost
	I1014 07:04:40.930400   11452 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-572000 ).state
	I1014 07:04:43.054500   11452 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:04:43.054500   11452 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:04:43.054500   11452 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-572000 ).networkadapters[0]).ipaddresses[0]
	I1014 07:04:45.619402   11452 main.go:141] libmachine: [stdout =====>] : 172.20.99.72
	
	I1014 07:04:45.619402   11452 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:04:45.628937   11452 main.go:141] libmachine: Using SSH client type: native
	I1014 07:04:45.629709   11452 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.99.72 22 <nil> <nil>}
	I1014 07:04:45.629709   11452 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1014 07:04:45.758577   11452 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728914685.755250773
	
	I1014 07:04:45.758577   11452 fix.go:216] guest clock: 1728914685.755250773
	I1014 07:04:45.758577   11452 fix.go:229] Guest: 2024-10-14 07:04:45.755250773 -0700 PDT Remote: 2024-10-14 07:04:40.9304 -0700 PDT m=+56.172978801 (delta=4.824850773s)
	I1014 07:04:45.758577   11452 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-572000 ).state
	I1014 07:04:47.897409   11452 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:04:47.897969   11452 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:04:47.897969   11452 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-572000 ).networkadapters[0]).ipaddresses[0]
	I1014 07:04:50.434663   11452 main.go:141] libmachine: [stdout =====>] : 172.20.99.72
	
	I1014 07:04:50.435165   11452 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:04:50.441158   11452 main.go:141] libmachine: Using SSH client type: native
	I1014 07:04:50.441574   11452 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.99.72 22 <nil> <nil>}
	I1014 07:04:50.441574   11452 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1728914685
	I1014 07:04:50.593553   11452 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Oct 14 14:04:45 UTC 2024
	
	I1014 07:04:50.593553   11452 fix.go:236] clock set: Mon Oct 14 14:04:45 UTC 2024
	 (err=<nil>)
	I1014 07:04:50.593553   11452 start.go:83] releasing machines lock for "functional-572000", held for 1m0.2482661s
	I1014 07:04:50.598939   11452 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-572000 ).state
	I1014 07:04:52.736462   11452 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:04:52.736462   11452 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:04:52.737360   11452 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-572000 ).networkadapters[0]).ipaddresses[0]
	I1014 07:04:55.257456   11452 main.go:141] libmachine: [stdout =====>] : 172.20.99.72
	
	I1014 07:04:55.258496   11452 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:04:55.262663   11452 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1014 07:04:55.262875   11452 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-572000 ).state
	I1014 07:04:55.271933   11452 ssh_runner.go:195] Run: cat /version.json
	I1014 07:04:55.271933   11452 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-572000 ).state
	I1014 07:04:57.432203   11452 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:04:57.432575   11452 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:04:57.432575   11452 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:04:57.432677   11452 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:04:57.432677   11452 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-572000 ).networkadapters[0]).ipaddresses[0]
	I1014 07:04:57.432849   11452 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-572000 ).networkadapters[0]).ipaddresses[0]
	I1014 07:05:00.125108   11452 main.go:141] libmachine: [stdout =====>] : 172.20.99.72
	
	I1014 07:05:00.125229   11452 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:05:00.125559   11452 sshutil.go:53] new ssh client: &{IP:172.20.99.72 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-572000\id_rsa Username:docker}
	I1014 07:05:00.150222   11452 main.go:141] libmachine: [stdout =====>] : 172.20.99.72
	
	I1014 07:05:00.150222   11452 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:05:00.150732   11452 sshutil.go:53] new ssh client: &{IP:172.20.99.72 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-572000\id_rsa Username:docker}
	I1014 07:05:00.228780   11452 command_runner.go:130] > {"iso_version": "v1.34.0-1728382514-19774", "kicbase_version": "v0.0.45-1728063813-19756", "minikube_version": "v1.34.0", "commit": "cf9f11c2b0369efc07a929c4a1fdb2b4b3c62ee9"}
	I1014 07:05:00.228953   11452 ssh_runner.go:235] Completed: cat /version.json: (4.9570148s)
	I1014 07:05:00.240646   11452 ssh_runner.go:195] Run: systemctl --version
	I1014 07:05:00.240646   11452 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I1014 07:05:00.241581   11452 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.9788538s)
	W1014 07:05:00.241581   11452 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1014 07:05:00.250857   11452 command_runner.go:130] > systemd 252 (252)
	I1014 07:05:00.250857   11452 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I1014 07:05:00.262929   11452 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1014 07:05:00.272368   11452 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1014 07:05:00.272500   11452 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 07:05:00.285814   11452 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 07:05:00.305581   11452 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1014 07:05:00.305690   11452 start.go:495] detecting cgroup driver to use...
	I1014 07:05:00.306100   11452 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W1014 07:05:00.336829   11452 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W1014 07:05:00.337864   11452 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1014 07:05:00.348510   11452 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1014 07:05:00.360457   11452 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1014 07:05:00.394176   11452 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1014 07:05:00.412724   11452 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1014 07:05:00.424602   11452 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1014 07:05:00.457298   11452 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1014 07:05:00.495546   11452 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1014 07:05:00.528815   11452 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1014 07:05:00.568463   11452 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 07:05:00.598824   11452 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1014 07:05:00.634616   11452 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1014 07:05:00.665271   11452 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
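
Note: the sed pipeline above rewrites /etc/containerd/config.toml in place (sandbox image, cgroup driver, runtime class, CNI conf dir). For reference, a Go sketch of the same kind of edit, here forcing SystemdCgroup = false on an in-memory copy (illustrative, not minikube's code):

// Go analogue of: sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := "[plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n  SystemdCgroup = true\n"
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	// ${1} keeps the original indentation captured by the group.
	fmt.Print(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
}
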
	I1014 07:05:00.696492   11452 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 07:05:00.716432   11452 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1014 07:05:00.728336   11452 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 07:05:00.758732   11452 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:05:01.013702   11452 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1014 07:05:01.046840   11452 start.go:495] detecting cgroup driver to use...
	I1014 07:05:01.058227   11452 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1014 07:05:01.081798   11452 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1014 07:05:01.081902   11452 command_runner.go:130] > [Unit]
	I1014 07:05:01.081902   11452 command_runner.go:130] > Description=Docker Application Container Engine
	I1014 07:05:01.081902   11452 command_runner.go:130] > Documentation=https://docs.docker.com
	I1014 07:05:01.081902   11452 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I1014 07:05:01.081902   11452 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I1014 07:05:01.081902   11452 command_runner.go:130] > StartLimitBurst=3
	I1014 07:05:01.081902   11452 command_runner.go:130] > StartLimitIntervalSec=60
	I1014 07:05:01.081902   11452 command_runner.go:130] > [Service]
	I1014 07:05:01.082009   11452 command_runner.go:130] > Type=notify
	I1014 07:05:01.082009   11452 command_runner.go:130] > Restart=on-failure
	I1014 07:05:01.082009   11452 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1014 07:05:01.082009   11452 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1014 07:05:01.082009   11452 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1014 07:05:01.082009   11452 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1014 07:05:01.082009   11452 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1014 07:05:01.082009   11452 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1014 07:05:01.082009   11452 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1014 07:05:01.082009   11452 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1014 07:05:01.082009   11452 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1014 07:05:01.082138   11452 command_runner.go:130] > ExecStart=
	I1014 07:05:01.082138   11452 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I1014 07:05:01.082138   11452 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1014 07:05:01.082248   11452 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1014 07:05:01.082248   11452 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1014 07:05:01.082248   11452 command_runner.go:130] > LimitNOFILE=infinity
	I1014 07:05:01.082248   11452 command_runner.go:130] > LimitNPROC=infinity
	I1014 07:05:01.082248   11452 command_runner.go:130] > LimitCORE=infinity
	I1014 07:05:01.082248   11452 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1014 07:05:01.082322   11452 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1014 07:05:01.082322   11452 command_runner.go:130] > TasksMax=infinity
	I1014 07:05:01.082353   11452 command_runner.go:130] > TimeoutStartSec=0
	I1014 07:05:01.082353   11452 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1014 07:05:01.082353   11452 command_runner.go:130] > Delegate=yes
	I1014 07:05:01.082353   11452 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1014 07:05:01.082353   11452 command_runner.go:130] > KillMode=process
	I1014 07:05:01.082353   11452 command_runner.go:130] > [Install]
	I1014 07:05:01.082353   11452 command_runner.go:130] > WantedBy=multi-user.target
	I1014 07:05:01.094359   11452 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 07:05:01.132014   11452 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 07:05:01.177091   11452 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 07:05:01.215097   11452 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1014 07:05:01.239886   11452 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 07:05:01.276410   11452 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1014 07:05:01.287102   11452 ssh_runner.go:195] Run: which cri-dockerd
	I1014 07:05:01.296749   11452 command_runner.go:130] > /usr/bin/cri-dockerd
	I1014 07:05:01.308744   11452 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1014 07:05:01.326348   11452 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1014 07:05:01.370566   11452 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1014 07:05:01.664436   11452 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1014 07:05:01.913818   11452 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1014 07:05:01.913818   11452 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
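
Note: the 130-byte /etc/docker/daemon.json pushed above is what switches Docker to the cgroupfs driver. A sketch of generating such a file; the exact key shown (exec-opts with native.cgroupdriver=cgroupfs) is an assumption about its contents, not the bytes minikube wrote:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Assumed shape: Docker selects its cgroup driver via exec-opts in daemon.json.
	cfg := map[string]interface{}{
		"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
	}
	b, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(b))
}
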
	I1014 07:05:01.960388   11452 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:05:02.240382   11452 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1014 07:05:15.122304   11452 ssh_runner.go:235] Completed: sudo systemctl restart docker: (12.8819083s)
	I1014 07:05:15.134888   11452 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1014 07:05:15.173905   11452 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1014 07:05:15.228233   11452 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1014 07:05:15.267256   11452 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1014 07:05:15.488509   11452 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1014 07:05:15.699170   11452 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:05:15.912102   11452 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1014 07:05:15.957682   11452 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1014 07:05:15.994431   11452 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:05:16.217292   11452 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1014 07:05:16.382582   11452 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1014 07:05:16.395400   11452 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1014 07:05:16.405967   11452 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1014 07:05:16.405967   11452 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1014 07:05:16.405967   11452 command_runner.go:130] > Device: 0,22	Inode: 1403        Links: 1
	I1014 07:05:16.405967   11452 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I1014 07:05:16.405967   11452 command_runner.go:130] > Access: 2024-10-14 14:05:16.243301892 +0000
	I1014 07:05:16.405967   11452 command_runner.go:130] > Modify: 2024-10-14 14:05:16.243301892 +0000
	I1014 07:05:16.405967   11452 command_runner.go:130] > Change: 2024-10-14 14:05:16.247301919 +0000
	I1014 07:05:16.405967   11452 command_runner.go:130] >  Birth: -
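
Note: "Will wait 60s for socket path" above is a stat-poll loop against /var/run/cri-dockerd.sock. A minimal sketch of that wait (waitForSocket is a hypothetical helper, not minikube's code):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls path until a unix socket appears or the deadline passes.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("socket is up")
}
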
	I1014 07:05:16.405967   11452 start.go:563] Will wait 60s for crictl version
	I1014 07:05:16.417928   11452 ssh_runner.go:195] Run: which crictl
	I1014 07:05:16.424298   11452 command_runner.go:130] > /usr/bin/crictl
	I1014 07:05:16.437103   11452 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1014 07:05:16.499792   11452 command_runner.go:130] > Version:  0.1.0
	I1014 07:05:16.499875   11452 command_runner.go:130] > RuntimeName:  docker
	I1014 07:05:16.499875   11452 command_runner.go:130] > RuntimeVersion:  27.3.1
	I1014 07:05:16.499875   11452 command_runner.go:130] > RuntimeApiVersion:  v1
	I1014 07:05:16.499875   11452 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I1014 07:05:16.511371   11452 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1014 07:05:16.549464   11452 command_runner.go:130] > 27.3.1
	I1014 07:05:16.560790   11452 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1014 07:05:16.592233   11452 command_runner.go:130] > 27.3.1
	I1014 07:05:16.596382   11452 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	I1014 07:05:16.596382   11452 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I1014 07:05:16.601355   11452 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I1014 07:05:16.601355   11452 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I1014 07:05:16.601355   11452 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I1014 07:05:16.601355   11452 ip.go:211] Found interface: {Index:13 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:08:65:4d Flags:up|broadcast|multicast|running}
	I1014 07:05:16.603754   11452 ip.go:214] interface addr: fe80::b548:ff79:d464:75b8/64
	I1014 07:05:16.604794   11452 ip.go:214] interface addr: 172.20.96.1/20
	I1014 07:05:16.614438   11452 ssh_runner.go:195] Run: grep 172.20.96.1	host.minikube.internal$ /etc/hosts
	I1014 07:05:16.620709   11452 command_runner.go:130] > 172.20.96.1	host.minikube.internal
	I1014 07:05:16.620709   11452 kubeadm.go:883] updating cluster {Name:functional-572000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-572000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.99.72 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 07:05:16.621435   11452 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1014 07:05:16.632104   11452 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1014 07:05:16.660755   11452 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.31.1
	I1014 07:05:16.660755   11452 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.31.1
	I1014 07:05:16.660755   11452 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.31.1
	I1014 07:05:16.660755   11452 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.31.1
	I1014 07:05:16.660755   11452 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.3
	I1014 07:05:16.660755   11452 command_runner.go:130] > registry.k8s.io/etcd:3.5.15-0
	I1014 07:05:16.660755   11452 command_runner.go:130] > registry.k8s.io/pause:3.10
	I1014 07:05:16.660755   11452 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 07:05:16.660755   11452 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1014 07:05:16.660755   11452 docker.go:619] Images already preloaded, skipping extraction
	I1014 07:05:16.671092   11452 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1014 07:05:16.698250   11452 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.31.1
	I1014 07:05:16.698302   11452 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.31.1
	I1014 07:05:16.698353   11452 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.31.1
	I1014 07:05:16.698353   11452 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.31.1
	I1014 07:05:16.698353   11452 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.3
	I1014 07:05:16.698353   11452 command_runner.go:130] > registry.k8s.io/etcd:3.5.15-0
	I1014 07:05:16.698353   11452 command_runner.go:130] > registry.k8s.io/pause:3.10
	I1014 07:05:16.698353   11452 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 07:05:16.698353   11452 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1014 07:05:16.698353   11452 cache_images.go:84] Images are preloaded, skipping loading
	I1014 07:05:16.698353   11452 kubeadm.go:934] updating node { 172.20.99.72 8441 v1.31.1 docker true true} ...
	I1014 07:05:16.698353   11452 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-572000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.20.99.72
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:functional-572000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 07:05:16.707955   11452 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1014 07:05:16.772994   11452 command_runner.go:130] > cgroupfs
	I1014 07:05:16.773096   11452 cni.go:84] Creating CNI manager for ""
	I1014 07:05:16.773096   11452 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1014 07:05:16.773096   11452 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1014 07:05:16.773096   11452 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.20.99.72 APIServerPort:8441 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-572000 NodeName:functional-572000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.20.99.72"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.20.99.72 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 07:05:16.773581   11452 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.20.99.72
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-572000"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "172.20.99.72"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.20.99.72"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1014 07:05:16.785940   11452 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1014 07:05:16.804674   11452 command_runner.go:130] > kubeadm
	I1014 07:05:16.804674   11452 command_runner.go:130] > kubectl
	I1014 07:05:16.804674   11452 command_runner.go:130] > kubelet
	I1014 07:05:16.804881   11452 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 07:05:16.816085   11452 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 07:05:16.835057   11452 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1014 07:05:16.867201   11452 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 07:05:16.897718   11452 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2295 bytes)
	I1014 07:05:16.942957   11452 ssh_runner.go:195] Run: grep 172.20.99.72	control-plane.minikube.internal$ /etc/hosts
	I1014 07:05:16.949443   11452 command_runner.go:130] > 172.20.99.72	control-plane.minikube.internal
	I1014 07:05:16.961108   11452 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:05:17.182896   11452 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 07:05:17.213467   11452 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-572000 for IP: 172.20.99.72
	I1014 07:05:17.213467   11452 certs.go:194] generating shared ca certs ...
	I1014 07:05:17.213467   11452 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:05:17.214713   11452 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I1014 07:05:17.214903   11452 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I1014 07:05:17.214903   11452 certs.go:256] generating profile certs ...
	I1014 07:05:17.216069   11452 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-572000\client.key
	I1014 07:05:17.216491   11452 certs.go:359] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-572000\apiserver.key.c54c45c3
	I1014 07:05:17.216739   11452 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-572000\proxy-client.key
	I1014 07:05:17.216739   11452 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I1014 07:05:17.216739   11452 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I1014 07:05:17.216739   11452 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1014 07:05:17.217458   11452 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1014 07:05:17.217658   11452 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-572000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1014 07:05:17.217857   11452 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-572000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1014 07:05:17.218007   11452 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-572000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1014 07:05:17.218252   11452 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-572000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1014 07:05:17.218566   11452 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\936.pem (1338 bytes)
	W1014 07:05:17.219147   11452 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\936_empty.pem, impossibly tiny 0 bytes
	I1014 07:05:17.219196   11452 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1014 07:05:17.219584   11452 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I1014 07:05:17.219973   11452 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1014 07:05:17.220324   11452 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I1014 07:05:17.220445   11452 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem (1708 bytes)
	I1014 07:05:17.220992   11452 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\936.pem -> /usr/share/ca-certificates/936.pem
	I1014 07:05:17.221150   11452 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem -> /usr/share/ca-certificates/9362.pem
	I1014 07:05:17.221297   11452 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1014 07:05:17.222672   11452 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 07:05:17.275995   11452 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1014 07:05:17.319561   11452 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 07:05:17.364964   11452 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 07:05:17.409186   11452 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-572000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1014 07:05:17.455100   11452 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-572000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1014 07:05:17.499289   11452 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-572000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 07:05:17.544181   11452 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-572000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1014 07:05:17.594802   11452 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\936.pem --> /usr/share/ca-certificates/936.pem (1338 bytes)
	I1014 07:05:17.639873   11452 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem --> /usr/share/ca-certificates/9362.pem (1708 bytes)
	I1014 07:05:17.684819   11452 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 07:05:17.727666   11452 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 07:05:17.773472   11452 ssh_runner.go:195] Run: openssl version
	I1014 07:05:17.781777   11452 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I1014 07:05:17.792873   11452 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/936.pem && ln -fs /usr/share/ca-certificates/936.pem /etc/ssl/certs/936.pem"
	I1014 07:05:17.824042   11452 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/936.pem
	I1014 07:05:17.832596   11452 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 14 14:00 /usr/share/ca-certificates/936.pem
	I1014 07:05:17.832685   11452 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 14:00 /usr/share/ca-certificates/936.pem
	I1014 07:05:17.843668   11452 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/936.pem
	I1014 07:05:17.852534   11452 command_runner.go:130] > 51391683
	I1014 07:05:17.863635   11452 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/936.pem /etc/ssl/certs/51391683.0"
	I1014 07:05:17.892978   11452 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9362.pem && ln -fs /usr/share/ca-certificates/9362.pem /etc/ssl/certs/9362.pem"
	I1014 07:05:17.924738   11452 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9362.pem
	I1014 07:05:17.931976   11452 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 14 14:00 /usr/share/ca-certificates/9362.pem
	I1014 07:05:17.932827   11452 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 14:00 /usr/share/ca-certificates/9362.pem
	I1014 07:05:17.943897   11452 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9362.pem
	I1014 07:05:17.953214   11452 command_runner.go:130] > 3ec20f2e
	I1014 07:05:17.964527   11452 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9362.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 07:05:17.997124   11452 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 07:05:18.028645   11452 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 07:05:18.036963   11452 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 14 13:44 /usr/share/ca-certificates/minikubeCA.pem
	I1014 07:05:18.037063   11452 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 13:44 /usr/share/ca-certificates/minikubeCA.pem
	I1014 07:05:18.048899   11452 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 07:05:18.058316   11452 command_runner.go:130] > b5213941
	I1014 07:05:18.070195   11452 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
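
Note: the sequence above hashes each CA with openssl x509 -hash and links it as /etc/ssl/certs/<hash>.0, the directory layout OpenSSL's CA lookup expects. A Go sketch of the same hash-and-symlink step (illustrative; paths and the b5213941 hash are taken from the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem"
	// Same command the log runs: openssl x509 -hash -noout -in <pem>
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // emulate ln -fs: replace any existing link
	if err := os.Symlink(pem, link); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("linked", link, "->", pem)
}
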
	I1014 07:05:18.100797   11452 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 07:05:18.108022   11452 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 07:05:18.108058   11452 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1014 07:05:18.108097   11452 command_runner.go:130] > Device: 8,1	Inode: 1052967     Links: 1
	I1014 07:05:18.108097   11452 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1014 07:05:18.108097   11452 command_runner.go:130] > Access: 2024-10-14 14:03:18.535188906 +0000
	I1014 07:05:18.108097   11452 command_runner.go:130] > Modify: 2024-10-14 14:03:18.535188906 +0000
	I1014 07:05:18.108097   11452 command_runner.go:130] > Change: 2024-10-14 14:03:18.535188906 +0000
	I1014 07:05:18.108097   11452 command_runner.go:130] >  Birth: 2024-10-14 14:03:18.535188906 +0000
	I1014 07:05:18.119158   11452 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1014 07:05:18.128564   11452 command_runner.go:130] > Certificate will not expire
	I1014 07:05:18.140714   11452 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1014 07:05:18.148956   11452 command_runner.go:130] > Certificate will not expire
	I1014 07:05:18.159832   11452 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1014 07:05:18.168605   11452 command_runner.go:130] > Certificate will not expire
	I1014 07:05:18.179809   11452 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1014 07:05:18.188144   11452 command_runner.go:130] > Certificate will not expire
	I1014 07:05:18.199937   11452 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1014 07:05:18.208605   11452 command_runner.go:130] > Certificate will not expire
	I1014 07:05:18.222459   11452 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1014 07:05:18.231154   11452 command_runner.go:130] > Certificate will not expire
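
Note: each openssl x509 -checkend 86400 call above asks whether a certificate expires within the next 24 hours. The equivalent check with Go's standard library, as a sketch:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Equivalent of: openssl x509 -noout -in <cert> -checkend 86400
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if time.Now().Add(86400 * time.Second).After(cert.NotAfter) {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}
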
	I1014 07:05:18.231637   11452 kubeadm.go:392] StartCluster: {Name:functional-572000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-572000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.99.72 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 07:05:18.243003   11452 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1014 07:05:18.277159   11452 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 07:05:18.295756   11452 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1014 07:05:18.295756   11452 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1014 07:05:18.295756   11452 command_runner.go:130] > /var/lib/minikube/etcd:
	I1014 07:05:18.295756   11452 command_runner.go:130] > member
	I1014 07:05:18.295756   11452 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1014 07:05:18.295756   11452 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1014 07:05:18.308367   11452 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1014 07:05:18.325746   11452 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1014 07:05:18.326995   11452 kubeconfig.go:125] found "functional-572000" server: "https://172.20.99.72:8441"
	I1014 07:05:18.328308   11452 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1014 07:05:18.328738   11452 kapi.go:59] client config for functional-572000: &rest.Config{Host:"https://172.20.99.72:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\functional-572000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\functional-572000\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2926ae0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1014 07:05:18.330388   11452 cert_rotation.go:140] Starting client certificate rotation controller
	I1014 07:05:18.339880   11452 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1014 07:05:18.358248   11452 kubeadm.go:630] The running cluster does not require reconfiguration: 172.20.99.72
	I1014 07:05:18.358381   11452 kubeadm.go:1160] stopping kube-system containers ...
	I1014 07:05:18.370438   11452 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1014 07:05:18.398715   11452 command_runner.go:130] > de9a77df3f3e
	I1014 07:05:18.399334   11452 command_runner.go:130] > 230f6f168379
	I1014 07:05:18.399334   11452 command_runner.go:130] > 2856135e1f5d
	I1014 07:05:18.399334   11452 command_runner.go:130] > 0cf202c76d50
	I1014 07:05:18.399334   11452 command_runner.go:130] > d2fc2b8d7f0c
	I1014 07:05:18.399334   11452 command_runner.go:130] > 44ccf69831a7
	I1014 07:05:18.399334   11452 command_runner.go:130] > 9d20c3e8340d
	I1014 07:05:18.399334   11452 command_runner.go:130] > 6eafcec504e7
	I1014 07:05:18.399334   11452 command_runner.go:130] > b3106c84c81f
	I1014 07:05:18.399334   11452 command_runner.go:130] > 44129de29131
	I1014 07:05:18.399334   11452 command_runner.go:130] > 19354c3ceec6
	I1014 07:05:18.399334   11452 command_runner.go:130] > 9b7c8a6fcd73
	I1014 07:05:18.399334   11452 command_runner.go:130] > 040754d37cd0
	I1014 07:05:18.399334   11452 command_runner.go:130] > 87b680ba1c14
	I1014 07:05:18.399334   11452 command_runner.go:130] > 0ae43ff1c115
	I1014 07:05:18.399461   11452 docker.go:483] Stopping containers: [de9a77df3f3e 230f6f168379 2856135e1f5d 0cf202c76d50 d2fc2b8d7f0c 44ccf69831a7 9d20c3e8340d 6eafcec504e7 b3106c84c81f 44129de29131 19354c3ceec6 9b7c8a6fcd73 040754d37cd0 87b680ba1c14 0ae43ff1c115]
	I1014 07:05:18.410382   11452 ssh_runner.go:195] Run: docker stop de9a77df3f3e 230f6f168379 2856135e1f5d 0cf202c76d50 d2fc2b8d7f0c 44ccf69831a7 9d20c3e8340d 6eafcec504e7 b3106c84c81f 44129de29131 19354c3ceec6 9b7c8a6fcd73 040754d37cd0 87b680ba1c14 0ae43ff1c115
	I1014 07:05:18.437472   11452 command_runner.go:130] > de9a77df3f3e
	I1014 07:05:18.437534   11452 command_runner.go:130] > 230f6f168379
	I1014 07:05:18.437534   11452 command_runner.go:130] > 2856135e1f5d
	I1014 07:05:18.437534   11452 command_runner.go:130] > 0cf202c76d50
	I1014 07:05:18.437534   11452 command_runner.go:130] > d2fc2b8d7f0c
	I1014 07:05:18.437534   11452 command_runner.go:130] > 44ccf69831a7
	I1014 07:05:18.437680   11452 command_runner.go:130] > 9d20c3e8340d
	I1014 07:05:18.437680   11452 command_runner.go:130] > 6eafcec504e7
	I1014 07:05:18.437735   11452 command_runner.go:130] > b3106c84c81f
	I1014 07:05:18.437776   11452 command_runner.go:130] > 44129de29131
	I1014 07:05:18.437776   11452 command_runner.go:130] > 19354c3ceec6
	I1014 07:05:18.437776   11452 command_runner.go:130] > 9b7c8a6fcd73
	I1014 07:05:18.437776   11452 command_runner.go:130] > 040754d37cd0
	I1014 07:05:18.437776   11452 command_runner.go:130] > 87b680ba1c14
	I1014 07:05:18.437889   11452 command_runner.go:130] > 0ae43ff1c115
	I1014 07:05:18.451829   11452 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1014 07:05:18.533885   11452 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 07:05:18.553263   11452 command_runner.go:130] > -rw------- 1 root root 5651 Oct 14 14:03 /etc/kubernetes/admin.conf
	I1014 07:05:18.553263   11452 command_runner.go:130] > -rw------- 1 root root 5652 Oct 14 14:03 /etc/kubernetes/controller-manager.conf
	I1014 07:05:18.553441   11452 command_runner.go:130] > -rw------- 1 root root 2007 Oct 14 14:03 /etc/kubernetes/kubelet.conf
	I1014 07:05:18.553441   11452 command_runner.go:130] > -rw------- 1 root root 5600 Oct 14 14:03 /etc/kubernetes/scheduler.conf
	I1014 07:05:18.553441   11452 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5651 Oct 14 14:03 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Oct 14 14:03 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Oct 14 14:03 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Oct 14 14:03 /etc/kubernetes/scheduler.conf
	
	I1014 07:05:18.566496   11452 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1014 07:05:18.584844   11452 command_runner.go:130] >     server: https://control-plane.minikube.internal:8441
	I1014 07:05:18.596993   11452 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1014 07:05:18.613019   11452 command_runner.go:130] >     server: https://control-plane.minikube.internal:8441
	I1014 07:05:18.629391   11452 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1014 07:05:18.646922   11452 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1014 07:05:18.658664   11452 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 07:05:18.686343   11452 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1014 07:05:18.704414   11452 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1014 07:05:18.715237   11452 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 07:05:18.743670   11452 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 07:05:18.761359   11452 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 07:05:18.834021   11452 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 07:05:18.834021   11452 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1014 07:05:18.834021   11452 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1014 07:05:18.834021   11452 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1014 07:05:18.834021   11452 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I1014 07:05:18.834021   11452 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I1014 07:05:18.834021   11452 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I1014 07:05:18.834021   11452 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I1014 07:05:18.834021   11452 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I1014 07:05:18.834021   11452 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1014 07:05:18.834021   11452 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1014 07:05:18.834021   11452 command_runner.go:130] > [certs] Using the existing "sa" key
	I1014 07:05:18.834021   11452 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 07:05:20.980072   11452 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 07:05:20.980072   11452 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
	I1014 07:05:20.980072   11452 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/super-admin.conf"
	I1014 07:05:20.980072   11452 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
	I1014 07:05:20.980072   11452 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 07:05:20.980072   11452 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 07:05:20.980072   11452 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.1460488s)
	I1014 07:05:20.980072   11452 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1014 07:05:21.297422   11452 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 07:05:21.297422   11452 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 07:05:21.297422   11452 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1014 07:05:21.297422   11452 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 07:05:21.397554   11452 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 07:05:21.397554   11452 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 07:05:21.397554   11452 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 07:05:21.397554   11452 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 07:05:21.397554   11452 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1014 07:05:21.491413   11452 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 07:05:21.491413   11452 api_server.go:52] waiting for apiserver process to appear ...
	I1014 07:05:21.504398   11452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 07:05:22.006412   11452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 07:05:22.504631   11452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 07:05:23.005486   11452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 07:05:23.505193   11452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 07:05:23.535211   11452 command_runner.go:130] > 4839
	I1014 07:05:23.535255   11452 api_server.go:72] duration metric: took 2.0438401s to wait for apiserver process to appear ...
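
Note: the pgrep runs above fire roughly every 500ms until the kube-apiserver process appears (hence the ~2s duration metric). A sketch of that retry loop (illustrative only, not minikube's code):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	// Poll `sudo pgrep -xnf kube-apiserver.*minikube.*` every 500ms, as logged above.
	deadline := time.Now().Add(60 * time.Second)
	for time.Now().Before(deadline) {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			fmt.Printf("apiserver pid: %s", out)
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Fprintln(os.Stderr, "timed out waiting for kube-apiserver process")
	os.Exit(1)
}
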
	I1014 07:05:23.535348   11452 api_server.go:88] waiting for apiserver healthz status ...
	I1014 07:05:23.535404   11452 api_server.go:253] Checking apiserver healthz at https://172.20.99.72:8441/healthz ...
	I1014 07:05:26.226980   11452 api_server.go:279] https://172.20.99.72:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1014 07:05:26.227086   11452 api_server.go:103] status: https://172.20.99.72:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1014 07:05:26.227124   11452 api_server.go:253] Checking apiserver healthz at https://172.20.99.72:8441/healthz ...
	I1014 07:05:26.322845   11452 api_server.go:279] https://172.20.99.72:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1014 07:05:26.322845   11452 api_server.go:103] status: https://172.20.99.72:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1014 07:05:26.535849   11452 api_server.go:253] Checking apiserver healthz at https://172.20.99.72:8441/healthz ...
	I1014 07:05:26.552794   11452 api_server.go:279] https://172.20.99.72:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1014 07:05:26.552794   11452 api_server.go:103] status: https://172.20.99.72:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1014 07:05:27.036362   11452 api_server.go:253] Checking apiserver healthz at https://172.20.99.72:8441/healthz ...
	I1014 07:05:27.043628   11452 api_server.go:279] https://172.20.99.72:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1014 07:05:27.043628   11452 api_server.go:103] status: https://172.20.99.72:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1014 07:05:27.536076   11452 api_server.go:253] Checking apiserver healthz at https://172.20.99.72:8441/healthz ...
	I1014 07:05:27.546230   11452 api_server.go:279] https://172.20.99.72:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1014 07:05:27.546230   11452 api_server.go:103] status: https://172.20.99.72:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1014 07:05:28.036015   11452 api_server.go:253] Checking apiserver healthz at https://172.20.99.72:8441/healthz ...
	I1014 07:05:28.045632   11452 api_server.go:279] https://172.20.99.72:8441/healthz returned 200:
	ok
	I1014 07:05:28.045632   11452 round_trippers.go:463] GET https://172.20.99.72:8441/version
	I1014 07:05:28.045632   11452 round_trippers.go:469] Request Headers:
	I1014 07:05:28.045632   11452 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:05:28.045632   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:05:28.061273   11452 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I1014 07:05:28.061273   11452 round_trippers.go:577] Response Headers:
	I1014 07:05:28.061273   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26a3c45b-1a4e-40ca-9f2c-6cbdd9243077
	I1014 07:05:28.061273   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f61aac55-0a5c-497d-832f-4955f256e2fa
	I1014 07:05:28.061273   11452 round_trippers.go:580]     Content-Length: 263
	I1014 07:05:28.061273   11452 round_trippers.go:580]     Date: Mon, 14 Oct 2024 14:05:28 GMT
	I1014 07:05:28.061273   11452 round_trippers.go:580]     Audit-Id: 17011327-ad90-4fcd-a2b2-7fca6361b614
	I1014 07:05:28.061273   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 07:05:28.061273   11452 round_trippers.go:580]     Content-Type: application/json
	I1014 07:05:28.061273   11452 request.go:1351] Response Body: {
	  "major": "1",
	  "minor": "31",
	  "gitVersion": "v1.31.1",
	  "gitCommit": "948afe5ca072329a73c8e79ed5938717a5cb3d21",
	  "gitTreeState": "clean",
	  "buildDate": "2024-09-11T21:22:08Z",
	  "goVersion": "go1.22.6",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1014 07:05:28.061273   11452 api_server.go:141] control plane version: v1.31.1
	I1014 07:05:28.061818   11452 api_server.go:131] duration metric: took 4.5259204s to wait for apiserver health ...
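The probe loop above fires one /healthz GET roughly every 500ms until the endpoint finally returns 200, then reads /version to confirm the control-plane build. Reduced to a minimal Go sketch, the pattern looks like the following; this is illustrative only, not minikube's api_server.go, and the 4-minute timeout and skipped TLS verification are assumptions:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls url until it returns HTTP 200 or the timeout expires.
    func waitForHealthz(url string, interval, timeout time.Duration) error {
    	// A real client would trust the cluster CA; verification is skipped
    	// here purely to keep the sketch self-contained.
    	client := &http.Client{
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    		Timeout:   2 * time.Second,
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if resp, err := client.Get(url); err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // apiserver reports healthy
    			}
    		}
    		time.Sleep(interval)
    	}
    	return fmt.Errorf("timed out waiting for %s", url)
    }

    func main() {
    	if err := waitForHealthz("https://172.20.99.72:8441/healthz", 500*time.Millisecond, 4*time.Minute); err != nil {
    		panic(err)
    	}
    	fmt.Println("healthz returned 200")
    }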
	I1014 07:05:28.061818   11452 cni.go:84] Creating CNI manager for ""
	I1014 07:05:28.061818   11452 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1014 07:05:28.065812   11452 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1014 07:05:28.078652   11452 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1014 07:05:28.101953   11452 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
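The conflist copied here is 496 bytes, but its exact contents are not shown in the log. A Go sketch of the step, with a typical bridge conflist standing in for the real payload (the JSON body and the subnet are assumptions, not minikube's actual bytes):

    package main

    import "os"

    // bridgeConflist is a representative bridge CNI config; the actual
    // 1-k8s.conflist payload generated above is not reproduced in the log.
    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "addIf": "true",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
        },
        {"type": "portmap", "capabilities": {"portMappings": true}}
      ]
    }`

    func main() {
    	// Mirrors the "sudo mkdir -p /etc/cni/net.d" and scp steps above.
    	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
    		panic(err)
    	}
    	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
    		panic(err)
    	}
    }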
	I1014 07:05:28.135163   11452 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 07:05:28.135163   11452 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1014 07:05:28.135163   11452 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1014 07:05:28.135163   11452 round_trippers.go:463] GET https://172.20.99.72:8441/api/v1/namespaces/kube-system/pods
	I1014 07:05:28.135163   11452 round_trippers.go:469] Request Headers:
	I1014 07:05:28.135163   11452 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:05:28.135163   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:05:28.152430   11452 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I1014 07:05:28.152430   11452 round_trippers.go:577] Response Headers:
	I1014 07:05:28.152430   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f61aac55-0a5c-497d-832f-4955f256e2fa
	I1014 07:05:28.152430   11452 round_trippers.go:580]     Date: Mon, 14 Oct 2024 14:05:28 GMT
	I1014 07:05:28.152430   11452 round_trippers.go:580]     Audit-Id: ed0b84ef-b1e0-4fdd-b482-36d27fa7fddd
	I1014 07:05:28.152430   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 07:05:28.152554   11452 round_trippers.go:580]     Content-Type: application/json
	I1014 07:05:28.152554   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26a3c45b-1a4e-40ca-9f2c-6cbdd9243077
	I1014 07:05:28.163105   11452 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"477"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-pst6d","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"0b49ac95-9a84-453c-8a36-f2e2eb5d257a","resourceVersion":"474","creationTimestamp":"2024-10-14T14:03:35Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"c05f8713-7390-40d7-8e74-6872459faf55","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T14:03:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c05f8713-7390-40d7-8e74-6872459faf55\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 52601 chars]
	I1014 07:05:28.169265   11452 system_pods.go:59] 7 kube-system pods found
	I1014 07:05:28.169892   11452 system_pods.go:61] "coredns-7c65d6cfc9-pst6d" [0b49ac95-9a84-453c-8a36-f2e2eb5d257a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 07:05:28.169892   11452 system_pods.go:61] "etcd-functional-572000" [0aaa43a1-8d67-4228-963a-de1b151d9420] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1014 07:05:28.169892   11452 system_pods.go:61] "kube-apiserver-functional-572000" [cfc32caa-722d-454e-a73d-2251e1353c91] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1014 07:05:28.169892   11452 system_pods.go:61] "kube-controller-manager-functional-572000" [9a671282-d3b8-4340-a751-e1c512a28d41] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1014 07:05:28.169892   11452 system_pods.go:61] "kube-proxy-5z6cf" [b623a456-b6e3-47a3-babf-66d8698f5a58] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1014 07:05:28.169989   11452 system_pods.go:61] "kube-scheduler-functional-572000" [3ba63999-5552-434c-92ef-a0adeba9bc26] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1014 07:05:28.169989   11452 system_pods.go:61] "storage-provisioner" [8ec5c3c6-2441-47cb-9862-e6c87bce62c2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1014 07:05:28.169989   11452 system_pods.go:74] duration metric: took 34.8257ms to wait for pod list to return data ...
	I1014 07:05:28.169989   11452 node_conditions.go:102] verifying NodePressure condition ...
	I1014 07:05:28.169989   11452 round_trippers.go:463] GET https://172.20.99.72:8441/api/v1/nodes
	I1014 07:05:28.169989   11452 round_trippers.go:469] Request Headers:
	I1014 07:05:28.169989   11452 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:05:28.169989   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:05:28.181065   11452 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1014 07:05:28.181065   11452 round_trippers.go:577] Response Headers:
	I1014 07:05:28.181065   11452 round_trippers.go:580]     Audit-Id: 8e317a29-52c1-488e-884b-ad3569ed33ec
	I1014 07:05:28.181065   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 07:05:28.181065   11452 round_trippers.go:580]     Content-Type: application/json
	I1014 07:05:28.181065   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26a3c45b-1a4e-40ca-9f2c-6cbdd9243077
	I1014 07:05:28.181065   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f61aac55-0a5c-497d-832f-4955f256e2fa
	I1014 07:05:28.181065   11452 round_trippers.go:580]     Date: Mon, 14 Oct 2024 14:05:28 GMT
	I1014 07:05:28.181065   11452 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"478"},"items":[{"metadata":{"name":"functional-572000","uid":"8679aa3a-5b35-4e4b-af7b-0576fab65eb9","resourceVersion":"467","creationTimestamp":"2024-10-14T14:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-572000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"functional-572000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T07_03_31_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","ti [truncated 4838 chars]
	I1014 07:05:28.182601   11452 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 07:05:28.182654   11452 node_conditions.go:123] node cpu capacity is 2
	I1014 07:05:28.182699   11452 node_conditions.go:105] duration metric: took 12.7104ms to run NodePressure ...
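Both waits in this stretch are plain reads: system_pods lists the kube-system pods, and node_conditions scans the NodeList for pressure conditions and capacity (2 CPUs and 17734596Ki ephemeral storage here). A client-go sketch performing the same two GETs; the kubeconfig path is hypothetical:

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	ctx := context.Background()

    	// GET /api/v1/namespaces/kube-system/pods, as in the system_pods wait.
    	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("%d kube-system pods found\n", len(pods.Items))

    	// GET /api/v1/nodes, as in the NodePressure verification.
    	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, n := range nodes.Items {
    		for _, c := range n.Status.Conditions {
    			if (c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure || c.Type == corev1.NodePIDPressure) && c.Status == corev1.ConditionTrue {
    				fmt.Printf("node %s under pressure: %s\n", n.Name, c.Type)
    			}
    		}
    		fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n", n.Name,
    			n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
    	}
    }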
	I1014 07:05:28.182783   11452 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 07:05:28.560507   11452 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1014 07:05:28.560507   11452 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
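The addon re-apply is a single kubeadm phase run inside the VM; minikube drives it over SSH via ssh_runner. A local os/exec sketch of the same invocation, for illustration only (the command string is taken verbatim from the ssh_runner line above):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Same command ssh_runner executes inside the VM above.
    	cmd := exec.Command("/bin/bash", "-c",
    		`sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml`)
    	out, err := cmd.CombinedOutput()
    	fmt.Print(string(out)) // expect "[addons] Applied essential addon: CoreDNS" then "... kube-proxy"
    	if err != nil {
    		panic(err)
    	}
    }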
	I1014 07:05:28.560507   11452 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1014 07:05:28.560507   11452 round_trippers.go:463] GET https://172.20.99.72:8441/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I1014 07:05:28.560507   11452 round_trippers.go:469] Request Headers:
	I1014 07:05:28.560507   11452 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:05:28.560507   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:05:28.564917   11452 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 07:05:28.564917   11452 round_trippers.go:577] Response Headers:
	I1014 07:05:28.564917   11452 round_trippers.go:580]     Audit-Id: 49613291-d640-4310-bd39-5ae3529fbb16
	I1014 07:05:28.564917   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 07:05:28.564917   11452 round_trippers.go:580]     Content-Type: application/json
	I1014 07:05:28.564917   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26a3c45b-1a4e-40ca-9f2c-6cbdd9243077
	I1014 07:05:28.564917   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f61aac55-0a5c-497d-832f-4955f256e2fa
	I1014 07:05:28.564917   11452 round_trippers.go:580]     Date: Mon, 14 Oct 2024 14:05:28 GMT
	I1014 07:05:28.565592   11452 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"480"},"items":[{"metadata":{"name":"etcd-functional-572000","namespace":"kube-system","uid":"0aaa43a1-8d67-4228-963a-de1b151d9420","resourceVersion":"468","creationTimestamp":"2024-10-14T14:03:28Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.20.99.72:2379","kubernetes.io/config.hash":"b77dd04ee881ae665b99c75cd6250572","kubernetes.io/config.mirror":"b77dd04ee881ae665b99c75cd6250572","kubernetes.io/config.seen":"2024-10-14T14:03:23.002167709Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-572000","uid":"8679aa3a-5b35-4e4b-af7b-0576fab65eb9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T14:03:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:k [truncated 31232 chars]
	I1014 07:05:28.567387   11452 kubeadm.go:739] kubelet initialised
	I1014 07:05:28.567433   11452 kubeadm.go:740] duration metric: took 6.9259ms waiting for restarted kubelet to initialise ...
	I1014 07:05:28.567433   11452 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
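From here pod_ready polls each system-critical pod's Ready condition, every ~500ms per the timestamps below, up to the 4m0s budget. A client-go sketch of one such wait; this is not pod_ready.go itself, and the helper name and use of wait.PollUntilContextTimeout are assumptions:

    package podwait

    import (
    	"context"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls until the pod's Ready condition is True or the timeout hits.
    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
    	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 4*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				return false, nil // treat errors as "not ready yet" and keep polling
    			}
    			for _, c := range pod.Status.Conditions {
    				if c.Type == corev1.PodReady {
    					return c.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    }

Wire up a clientset as in the earlier sketch and call, say, waitPodReady(ctx, cs, "kube-system", "etcd-functional-572000") to reproduce the etcd wait below.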
	I1014 07:05:28.567619   11452 round_trippers.go:463] GET https://172.20.99.72:8441/api/v1/namespaces/kube-system/pods
	I1014 07:05:28.567665   11452 round_trippers.go:469] Request Headers:
	I1014 07:05:28.567706   11452 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:05:28.567706   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:05:28.573461   11452 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 07:05:28.573461   11452 round_trippers.go:577] Response Headers:
	I1014 07:05:28.573503   11452 round_trippers.go:580]     Date: Mon, 14 Oct 2024 14:05:28 GMT
	I1014 07:05:28.573503   11452 round_trippers.go:580]     Audit-Id: 495476bd-5294-4b75-8eec-fb75a08fbffb
	I1014 07:05:28.573503   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 07:05:28.573503   11452 round_trippers.go:580]     Content-Type: application/json
	I1014 07:05:28.573503   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26a3c45b-1a4e-40ca-9f2c-6cbdd9243077
	I1014 07:05:28.573503   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f61aac55-0a5c-497d-832f-4955f256e2fa
	I1014 07:05:28.574956   11452 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"480"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-pst6d","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"0b49ac95-9a84-453c-8a36-f2e2eb5d257a","resourceVersion":"474","creationTimestamp":"2024-10-14T14:03:35Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"c05f8713-7390-40d7-8e74-6872459faf55","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T14:03:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c05f8713-7390-40d7-8e74-6872459faf55\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 52601 chars]
	I1014 07:05:28.577658   11452 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-pst6d" in "kube-system" namespace to be "Ready" ...
	I1014 07:05:28.577658   11452 round_trippers.go:463] GET https://172.20.99.72:8441/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-pst6d
	I1014 07:05:28.577658   11452 round_trippers.go:469] Request Headers:
	I1014 07:05:28.577658   11452 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:05:28.577658   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:05:28.581563   11452 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 07:05:28.581563   11452 round_trippers.go:577] Response Headers:
	I1014 07:05:28.581563   11452 round_trippers.go:580]     Audit-Id: d1b7924c-4ccf-459d-824e-9c2cd3522e7a
	I1014 07:05:28.581563   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 07:05:28.581563   11452 round_trippers.go:580]     Content-Type: application/json
	I1014 07:05:28.581563   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26a3c45b-1a4e-40ca-9f2c-6cbdd9243077
	I1014 07:05:28.581563   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f61aac55-0a5c-497d-832f-4955f256e2fa
	I1014 07:05:28.581563   11452 round_trippers.go:580]     Date: Mon, 14 Oct 2024 14:05:28 GMT
	I1014 07:05:28.581563   11452 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-pst6d","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"0b49ac95-9a84-453c-8a36-f2e2eb5d257a","resourceVersion":"474","creationTimestamp":"2024-10-14T14:03:35Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"c05f8713-7390-40d7-8e74-6872459faf55","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T14:03:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c05f8713-7390-40d7-8e74-6872459faf55\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6757 chars]
	I1014 07:05:28.582495   11452 round_trippers.go:463] GET https://172.20.99.72:8441/api/v1/nodes/functional-572000
	I1014 07:05:28.582495   11452 round_trippers.go:469] Request Headers:
	I1014 07:05:28.582495   11452 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:05:28.582495   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:05:28.585940   11452 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 07:05:28.585940   11452 round_trippers.go:577] Response Headers:
	I1014 07:05:28.585940   11452 round_trippers.go:580]     Audit-Id: 8783caf9-3779-44c0-85cb-f8df1dfb1f2a
	I1014 07:05:28.585940   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 07:05:28.585940   11452 round_trippers.go:580]     Content-Type: application/json
	I1014 07:05:28.585940   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26a3c45b-1a4e-40ca-9f2c-6cbdd9243077
	I1014 07:05:28.585940   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f61aac55-0a5c-497d-832f-4955f256e2fa
	I1014 07:05:28.585940   11452 round_trippers.go:580]     Date: Mon, 14 Oct 2024 14:05:28 GMT
	I1014 07:05:28.585940   11452 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-572000","uid":"8679aa3a-5b35-4e4b-af7b-0576fab65eb9","resourceVersion":"467","creationTimestamp":"2024-10-14T14:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-572000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"functional-572000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T07_03_31_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T14:03:27Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I1014 07:05:29.078859   11452 round_trippers.go:463] GET https://172.20.99.72:8441/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-pst6d
	I1014 07:05:29.078859   11452 round_trippers.go:469] Request Headers:
	I1014 07:05:29.078859   11452 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:05:29.078859   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:05:29.083025   11452 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 07:05:29.083025   11452 round_trippers.go:577] Response Headers:
	I1014 07:05:29.083025   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26a3c45b-1a4e-40ca-9f2c-6cbdd9243077
	I1014 07:05:29.083025   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f61aac55-0a5c-497d-832f-4955f256e2fa
	I1014 07:05:29.083025   11452 round_trippers.go:580]     Date: Mon, 14 Oct 2024 14:05:29 GMT
	I1014 07:05:29.083025   11452 round_trippers.go:580]     Audit-Id: 866b1f88-6c55-4c9c-a3b7-1cda03d46d8d
	I1014 07:05:29.083025   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 07:05:29.083025   11452 round_trippers.go:580]     Content-Type: application/json
	I1014 07:05:29.083025   11452 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-pst6d","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"0b49ac95-9a84-453c-8a36-f2e2eb5d257a","resourceVersion":"483","creationTimestamp":"2024-10-14T14:03:35Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"c05f8713-7390-40d7-8e74-6872459faf55","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T14:03:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c05f8713-7390-40d7-8e74-6872459faf55\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6704 chars]
	I1014 07:05:29.084376   11452 round_trippers.go:463] GET https://172.20.99.72:8441/api/v1/nodes/functional-572000
	I1014 07:05:29.084610   11452 round_trippers.go:469] Request Headers:
	I1014 07:05:29.084610   11452 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:05:29.084610   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:05:29.088817   11452 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 07:05:29.088817   11452 round_trippers.go:577] Response Headers:
	I1014 07:05:29.088817   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26a3c45b-1a4e-40ca-9f2c-6cbdd9243077
	I1014 07:05:29.088942   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f61aac55-0a5c-497d-832f-4955f256e2fa
	I1014 07:05:29.088942   11452 round_trippers.go:580]     Date: Mon, 14 Oct 2024 14:05:29 GMT
	I1014 07:05:29.088942   11452 round_trippers.go:580]     Audit-Id: ade7a593-e3f8-455d-b5b5-d1af764e7604
	I1014 07:05:29.088942   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 07:05:29.088942   11452 round_trippers.go:580]     Content-Type: application/json
	I1014 07:05:29.089337   11452 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-572000","uid":"8679aa3a-5b35-4e4b-af7b-0576fab65eb9","resourceVersion":"467","creationTimestamp":"2024-10-14T14:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-572000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"functional-572000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T07_03_31_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T14:03:27Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I1014 07:05:29.089913   11452 pod_ready.go:93] pod "coredns-7c65d6cfc9-pst6d" in "kube-system" namespace has status "Ready":"True"
	I1014 07:05:29.089994   11452 pod_ready.go:82] duration metric: took 512.2551ms for pod "coredns-7c65d6cfc9-pst6d" in "kube-system" namespace to be "Ready" ...
	I1014 07:05:29.090015   11452 pod_ready.go:79] waiting up to 4m0s for pod "etcd-functional-572000" in "kube-system" namespace to be "Ready" ...
	I1014 07:05:29.090164   11452 round_trippers.go:463] GET https://172.20.99.72:8441/api/v1/namespaces/kube-system/pods/etcd-functional-572000
	I1014 07:05:29.090190   11452 round_trippers.go:469] Request Headers:
	I1014 07:05:29.090190   11452 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:05:29.090190   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:05:29.092940   11452 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 07:05:29.092996   11452 round_trippers.go:577] Response Headers:
	I1014 07:05:29.092996   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f61aac55-0a5c-497d-832f-4955f256e2fa
	I1014 07:05:29.092996   11452 round_trippers.go:580]     Date: Mon, 14 Oct 2024 14:05:29 GMT
	I1014 07:05:29.093038   11452 round_trippers.go:580]     Audit-Id: 4d9e54d1-f315-4b25-82d9-89771ff3c404
	I1014 07:05:29.093038   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 07:05:29.093038   11452 round_trippers.go:580]     Content-Type: application/json
	I1014 07:05:29.093038   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26a3c45b-1a4e-40ca-9f2c-6cbdd9243077
	I1014 07:05:29.093191   11452 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-572000","namespace":"kube-system","uid":"0aaa43a1-8d67-4228-963a-de1b151d9420","resourceVersion":"468","creationTimestamp":"2024-10-14T14:03:28Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.20.99.72:2379","kubernetes.io/config.hash":"b77dd04ee881ae665b99c75cd6250572","kubernetes.io/config.mirror":"b77dd04ee881ae665b99c75cd6250572","kubernetes.io/config.seen":"2024-10-14T14:03:23.002167709Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-572000","uid":"8679aa3a-5b35-4e4b-af7b-0576fab65eb9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T14:03:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6892 chars]
	I1014 07:05:29.094229   11452 round_trippers.go:463] GET https://172.20.99.72:8441/api/v1/nodes/functional-572000
	I1014 07:05:29.094291   11452 round_trippers.go:469] Request Headers:
	I1014 07:05:29.094291   11452 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:05:29.094291   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:05:29.096810   11452 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 07:05:29.096810   11452 round_trippers.go:577] Response Headers:
	I1014 07:05:29.096810   11452 round_trippers.go:580]     Audit-Id: ca779643-5975-4ef0-8014-0185508d4092
	I1014 07:05:29.096810   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 07:05:29.096810   11452 round_trippers.go:580]     Content-Type: application/json
	I1014 07:05:29.096810   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26a3c45b-1a4e-40ca-9f2c-6cbdd9243077
	I1014 07:05:29.096810   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f61aac55-0a5c-497d-832f-4955f256e2fa
	I1014 07:05:29.096810   11452 round_trippers.go:580]     Date: Mon, 14 Oct 2024 14:05:29 GMT
	I1014 07:05:29.097824   11452 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-572000","uid":"8679aa3a-5b35-4e4b-af7b-0576fab65eb9","resourceVersion":"467","creationTimestamp":"2024-10-14T14:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-572000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"functional-572000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T07_03_31_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T14:03:27Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I1014 07:05:29.590841   11452 round_trippers.go:463] GET https://172.20.99.72:8441/api/v1/namespaces/kube-system/pods/etcd-functional-572000
	I1014 07:05:29.590841   11452 round_trippers.go:469] Request Headers:
	I1014 07:05:29.590841   11452 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:05:29.590841   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:05:29.595326   11452 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 07:05:29.595437   11452 round_trippers.go:577] Response Headers:
	I1014 07:05:29.595437   11452 round_trippers.go:580]     Audit-Id: fbe12ea8-f88f-43ab-93ea-17e5a8923514
	I1014 07:05:29.595437   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 07:05:29.595437   11452 round_trippers.go:580]     Content-Type: application/json
	I1014 07:05:29.595437   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26a3c45b-1a4e-40ca-9f2c-6cbdd9243077
	I1014 07:05:29.595437   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f61aac55-0a5c-497d-832f-4955f256e2fa
	I1014 07:05:29.595437   11452 round_trippers.go:580]     Date: Mon, 14 Oct 2024 14:05:29 GMT
	I1014 07:05:29.595741   11452 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-572000","namespace":"kube-system","uid":"0aaa43a1-8d67-4228-963a-de1b151d9420","resourceVersion":"468","creationTimestamp":"2024-10-14T14:03:28Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.20.99.72:2379","kubernetes.io/config.hash":"b77dd04ee881ae665b99c75cd6250572","kubernetes.io/config.mirror":"b77dd04ee881ae665b99c75cd6250572","kubernetes.io/config.seen":"2024-10-14T14:03:23.002167709Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-572000","uid":"8679aa3a-5b35-4e4b-af7b-0576fab65eb9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T14:03:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6892 chars]
	I1014 07:05:29.596304   11452 round_trippers.go:463] GET https://172.20.99.72:8441/api/v1/nodes/functional-572000
	I1014 07:05:29.596304   11452 round_trippers.go:469] Request Headers:
	I1014 07:05:29.596304   11452 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:05:29.596304   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:05:29.607938   11452 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1014 07:05:29.607938   11452 round_trippers.go:577] Response Headers:
	I1014 07:05:29.607938   11452 round_trippers.go:580]     Content-Type: application/json
	I1014 07:05:29.607938   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26a3c45b-1a4e-40ca-9f2c-6cbdd9243077
	I1014 07:05:29.607938   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f61aac55-0a5c-497d-832f-4955f256e2fa
	I1014 07:05:29.607938   11452 round_trippers.go:580]     Date: Mon, 14 Oct 2024 14:05:29 GMT
	I1014 07:05:29.607938   11452 round_trippers.go:580]     Audit-Id: 4f1e9c1d-9206-4886-b425-eb204aceae62
	I1014 07:05:29.607938   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 07:05:29.607938   11452 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-572000","uid":"8679aa3a-5b35-4e4b-af7b-0576fab65eb9","resourceVersion":"467","creationTimestamp":"2024-10-14T14:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-572000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"functional-572000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T07_03_31_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T14:03:27Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I1014 07:05:30.090762   11452 round_trippers.go:463] GET https://172.20.99.72:8441/api/v1/namespaces/kube-system/pods/etcd-functional-572000
	I1014 07:05:30.090890   11452 round_trippers.go:469] Request Headers:
	I1014 07:05:30.090890   11452 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:05:30.090890   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:05:30.095362   11452 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 07:05:30.095439   11452 round_trippers.go:577] Response Headers:
	I1014 07:05:30.095439   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f61aac55-0a5c-497d-832f-4955f256e2fa
	I1014 07:05:30.095439   11452 round_trippers.go:580]     Date: Mon, 14 Oct 2024 14:05:30 GMT
	I1014 07:05:30.095439   11452 round_trippers.go:580]     Audit-Id: 5bf4aa55-66d0-4344-b325-7a5c97fa2203
	I1014 07:05:30.095439   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 07:05:30.095439   11452 round_trippers.go:580]     Content-Type: application/json
	I1014 07:05:30.095439   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26a3c45b-1a4e-40ca-9f2c-6cbdd9243077
	I1014 07:05:30.095824   11452 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-572000","namespace":"kube-system","uid":"0aaa43a1-8d67-4228-963a-de1b151d9420","resourceVersion":"468","creationTimestamp":"2024-10-14T14:03:28Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.20.99.72:2379","kubernetes.io/config.hash":"b77dd04ee881ae665b99c75cd6250572","kubernetes.io/config.mirror":"b77dd04ee881ae665b99c75cd6250572","kubernetes.io/config.seen":"2024-10-14T14:03:23.002167709Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-572000","uid":"8679aa3a-5b35-4e4b-af7b-0576fab65eb9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T14:03:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6892 chars]
	I1014 07:05:30.096605   11452 round_trippers.go:463] GET https://172.20.99.72:8441/api/v1/nodes/functional-572000
	I1014 07:05:30.096660   11452 round_trippers.go:469] Request Headers:
	I1014 07:05:30.096716   11452 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:05:30.096716   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:05:30.099806   11452 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 07:05:30.099873   11452 round_trippers.go:577] Response Headers:
	I1014 07:05:30.099873   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 07:05:30.099873   11452 round_trippers.go:580]     Content-Type: application/json
	I1014 07:05:30.099873   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26a3c45b-1a4e-40ca-9f2c-6cbdd9243077
	I1014 07:05:30.099873   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f61aac55-0a5c-497d-832f-4955f256e2fa
	I1014 07:05:30.099873   11452 round_trippers.go:580]     Date: Mon, 14 Oct 2024 14:05:30 GMT
	I1014 07:05:30.099873   11452 round_trippers.go:580]     Audit-Id: 75b962c8-c23b-4ed2-808f-6bebfe15f905
	I1014 07:05:30.100367   11452 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-572000","uid":"8679aa3a-5b35-4e4b-af7b-0576fab65eb9","resourceVersion":"467","creationTimestamp":"2024-10-14T14:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-572000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"functional-572000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T07_03_31_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T14:03:27Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I1014 07:05:30.590900   11452 round_trippers.go:463] GET https://172.20.99.72:8441/api/v1/namespaces/kube-system/pods/etcd-functional-572000
	I1014 07:05:30.590900   11452 round_trippers.go:469] Request Headers:
	I1014 07:05:30.590900   11452 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:05:30.590900   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:05:30.595004   11452 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 07:05:30.595004   11452 round_trippers.go:577] Response Headers:
	I1014 07:05:30.596005   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 07:05:30.596005   11452 round_trippers.go:580]     Content-Type: application/json
	I1014 07:05:30.596005   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26a3c45b-1a4e-40ca-9f2c-6cbdd9243077
	I1014 07:05:30.596005   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f61aac55-0a5c-497d-832f-4955f256e2fa
	I1014 07:05:30.596052   11452 round_trippers.go:580]     Date: Mon, 14 Oct 2024 14:05:30 GMT
	I1014 07:05:30.596052   11452 round_trippers.go:580]     Audit-Id: 73a06da8-1c31-43b6-a540-4b034ba2d4ca
	I1014 07:05:30.596260   11452 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-572000","namespace":"kube-system","uid":"0aaa43a1-8d67-4228-963a-de1b151d9420","resourceVersion":"468","creationTimestamp":"2024-10-14T14:03:28Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.20.99.72:2379","kubernetes.io/config.hash":"b77dd04ee881ae665b99c75cd6250572","kubernetes.io/config.mirror":"b77dd04ee881ae665b99c75cd6250572","kubernetes.io/config.seen":"2024-10-14T14:03:23.002167709Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-572000","uid":"8679aa3a-5b35-4e4b-af7b-0576fab65eb9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T14:03:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6892 chars]
	I1014 07:05:30.597027   11452 round_trippers.go:463] GET https://172.20.99.72:8441/api/v1/nodes/functional-572000
	I1014 07:05:30.597027   11452 round_trippers.go:469] Request Headers:
	I1014 07:05:30.597027   11452 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:05:30.597027   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:05:30.600563   11452 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 07:05:30.600563   11452 round_trippers.go:577] Response Headers:
	I1014 07:05:30.600563   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f61aac55-0a5c-497d-832f-4955f256e2fa
	I1014 07:05:30.600563   11452 round_trippers.go:580]     Date: Mon, 14 Oct 2024 14:05:30 GMT
	I1014 07:05:30.600563   11452 round_trippers.go:580]     Audit-Id: 251074b3-28f7-4f6a-b8fe-96cc4a4db9da
	I1014 07:05:30.600563   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 07:05:30.600563   11452 round_trippers.go:580]     Content-Type: application/json
	I1014 07:05:30.600563   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26a3c45b-1a4e-40ca-9f2c-6cbdd9243077
	I1014 07:05:30.601106   11452 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-572000","uid":"8679aa3a-5b35-4e4b-af7b-0576fab65eb9","resourceVersion":"467","creationTimestamp":"2024-10-14T14:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-572000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"functional-572000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T07_03_31_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T14:03:27Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I1014 07:05:31.090193   11452 round_trippers.go:463] GET https://172.20.99.72:8441/api/v1/namespaces/kube-system/pods/etcd-functional-572000
	I1014 07:05:31.090193   11452 round_trippers.go:469] Request Headers:
	I1014 07:05:31.090193   11452 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:05:31.090193   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:05:31.095574   11452 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 07:05:31.095642   11452 round_trippers.go:577] Response Headers:
	I1014 07:05:31.095642   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 07:05:31.095642   11452 round_trippers.go:580]     Content-Type: application/json
	I1014 07:05:31.095642   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26a3c45b-1a4e-40ca-9f2c-6cbdd9243077
	I1014 07:05:31.095642   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f61aac55-0a5c-497d-832f-4955f256e2fa
	I1014 07:05:31.095642   11452 round_trippers.go:580]     Date: Mon, 14 Oct 2024 14:05:31 GMT
	I1014 07:05:31.095642   11452 round_trippers.go:580]     Audit-Id: 86034c2b-58bf-4452-9ee1-4543f9d738df
	I1014 07:05:31.095920   11452 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-572000","namespace":"kube-system","uid":"0aaa43a1-8d67-4228-963a-de1b151d9420","resourceVersion":"468","creationTimestamp":"2024-10-14T14:03:28Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.20.99.72:2379","kubernetes.io/config.hash":"b77dd04ee881ae665b99c75cd6250572","kubernetes.io/config.mirror":"b77dd04ee881ae665b99c75cd6250572","kubernetes.io/config.seen":"2024-10-14T14:03:23.002167709Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-572000","uid":"8679aa3a-5b35-4e4b-af7b-0576fab65eb9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T14:03:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6892 chars]
	I1014 07:05:31.096629   11452 round_trippers.go:463] GET https://172.20.99.72:8441/api/v1/nodes/functional-572000
	I1014 07:05:31.096629   11452 round_trippers.go:469] Request Headers:
	I1014 07:05:31.096629   11452 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:05:31.096629   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:05:31.100925   11452 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 07:05:31.100925   11452 round_trippers.go:577] Response Headers:
	I1014 07:05:31.100925   11452 round_trippers.go:580]     Audit-Id: bb54453f-ad20-416b-94d9-e8f0cd7c3669
	I1014 07:05:31.100925   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 07:05:31.100925   11452 round_trippers.go:580]     Content-Type: application/json
	I1014 07:05:31.100925   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26a3c45b-1a4e-40ca-9f2c-6cbdd9243077
	I1014 07:05:31.100925   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f61aac55-0a5c-497d-832f-4955f256e2fa
	I1014 07:05:31.100925   11452 round_trippers.go:580]     Date: Mon, 14 Oct 2024 14:05:31 GMT
	I1014 07:05:31.101638   11452 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-572000","uid":"8679aa3a-5b35-4e4b-af7b-0576fab65eb9","resourceVersion":"467","creationTimestamp":"2024-10-14T14:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-572000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"functional-572000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T07_03_31_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T14:03:27Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I1014 07:05:31.101869   11452 pod_ready.go:103] pod "etcd-functional-572000" in "kube-system" namespace has status "Ready":"False"
	I1014 07:05:31.590462   11452 round_trippers.go:463] GET https://172.20.99.72:8441/api/v1/namespaces/kube-system/pods/etcd-functional-572000
	I1014 07:05:31.590462   11452 round_trippers.go:469] Request Headers:
	I1014 07:05:31.590462   11452 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:05:31.590462   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:05:31.594762   11452 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 07:05:31.594852   11452 round_trippers.go:577] Response Headers:
	I1014 07:05:31.594852   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 07:05:31.594923   11452 round_trippers.go:580]     Content-Type: application/json
	I1014 07:05:31.594923   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26a3c45b-1a4e-40ca-9f2c-6cbdd9243077
	I1014 07:05:31.594923   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f61aac55-0a5c-497d-832f-4955f256e2fa
	I1014 07:05:31.594923   11452 round_trippers.go:580]     Date: Mon, 14 Oct 2024 14:05:31 GMT
	I1014 07:05:31.594923   11452 round_trippers.go:580]     Audit-Id: 3d5c8dbf-9de9-4f8c-9ddf-1ea24fc4c52d
	I1014 07:05:31.595144   11452 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-572000","namespace":"kube-system","uid":"0aaa43a1-8d67-4228-963a-de1b151d9420","resourceVersion":"468","creationTimestamp":"2024-10-14T14:03:28Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.20.99.72:2379","kubernetes.io/config.hash":"b77dd04ee881ae665b99c75cd6250572","kubernetes.io/config.mirror":"b77dd04ee881ae665b99c75cd6250572","kubernetes.io/config.seen":"2024-10-14T14:03:23.002167709Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-572000","uid":"8679aa3a-5b35-4e4b-af7b-0576fab65eb9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T14:03:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6892 chars]
	I1014 07:05:31.595900   11452 round_trippers.go:463] GET https://172.20.99.72:8441/api/v1/nodes/functional-572000
	I1014 07:05:31.595977   11452 round_trippers.go:469] Request Headers:
	I1014 07:05:31.595977   11452 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:05:31.595977   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:05:31.599960   11452 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 07:05:31.599960   11452 round_trippers.go:577] Response Headers:
	I1014 07:05:31.599960   11452 round_trippers.go:580]     Audit-Id: ec72c5d7-b241-4d80-909b-6c0fe166b024
	I1014 07:05:31.599960   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 07:05:31.599960   11452 round_trippers.go:580]     Content-Type: application/json
	I1014 07:05:31.599960   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26a3c45b-1a4e-40ca-9f2c-6cbdd9243077
	I1014 07:05:31.599960   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f61aac55-0a5c-497d-832f-4955f256e2fa
	I1014 07:05:31.599960   11452 round_trippers.go:580]     Date: Mon, 14 Oct 2024 14:05:31 GMT
	I1014 07:05:31.599960   11452 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-572000","uid":"8679aa3a-5b35-4e4b-af7b-0576fab65eb9","resourceVersion":"467","creationTimestamp":"2024-10-14T14:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-572000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"functional-572000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T07_03_31_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T14:03:27Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I1014 07:05:32.090280   11452 round_trippers.go:463] GET https://172.20.99.72:8441/api/v1/namespaces/kube-system/pods/etcd-functional-572000
	I1014 07:05:32.090280   11452 round_trippers.go:469] Request Headers:
	I1014 07:05:32.090280   11452 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:05:32.090280   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:05:32.094937   11452 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 07:05:32.094937   11452 round_trippers.go:577] Response Headers:
	I1014 07:05:32.094937   11452 round_trippers.go:580]     Content-Type: application/json
	I1014 07:05:32.094937   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26a3c45b-1a4e-40ca-9f2c-6cbdd9243077
	I1014 07:05:32.094937   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f61aac55-0a5c-497d-832f-4955f256e2fa
	I1014 07:05:32.095057   11452 round_trippers.go:580]     Date: Mon, 14 Oct 2024 14:05:32 GMT
	I1014 07:05:32.095057   11452 round_trippers.go:580]     Audit-Id: cd1284af-b7f7-4584-9a87-a12a1f6edb0d
	I1014 07:05:32.095057   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 07:05:32.095161   11452 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-572000","namespace":"kube-system","uid":"0aaa43a1-8d67-4228-963a-de1b151d9420","resourceVersion":"468","creationTimestamp":"2024-10-14T14:03:28Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.20.99.72:2379","kubernetes.io/config.hash":"b77dd04ee881ae665b99c75cd6250572","kubernetes.io/config.mirror":"b77dd04ee881ae665b99c75cd6250572","kubernetes.io/config.seen":"2024-10-14T14:03:23.002167709Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-572000","uid":"8679aa3a-5b35-4e4b-af7b-0576fab65eb9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T14:03:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6892 chars]
	I1014 07:05:32.096263   11452 round_trippers.go:463] GET https://172.20.99.72:8441/api/v1/nodes/functional-572000
	I1014 07:05:32.096263   11452 round_trippers.go:469] Request Headers:
	I1014 07:05:32.096369   11452 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:05:32.096369   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:05:32.098626   11452 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 07:05:32.098819   11452 round_trippers.go:577] Response Headers:
	I1014 07:05:32.098819   11452 round_trippers.go:580]     Audit-Id: 0b64062a-c3b9-41f1-b7ad-8fd74b2b122a
	I1014 07:05:32.098819   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 07:05:32.098819   11452 round_trippers.go:580]     Content-Type: application/json
	I1014 07:05:32.098819   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26a3c45b-1a4e-40ca-9f2c-6cbdd9243077
	I1014 07:05:32.098819   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f61aac55-0a5c-497d-832f-4955f256e2fa
	I1014 07:05:32.098819   11452 round_trippers.go:580]     Date: Mon, 14 Oct 2024 14:05:32 GMT
	I1014 07:05:32.099147   11452 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-572000","uid":"8679aa3a-5b35-4e4b-af7b-0576fab65eb9","resourceVersion":"467","creationTimestamp":"2024-10-14T14:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-572000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"functional-572000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T07_03_31_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T14:03:27Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I1014 07:05:32.590344   11452 round_trippers.go:463] GET https://172.20.99.72:8441/api/v1/namespaces/kube-system/pods/etcd-functional-572000
	I1014 07:05:32.590344   11452 round_trippers.go:469] Request Headers:
	I1014 07:05:32.590344   11452 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:05:32.590344   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:05:32.604928   11452 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I1014 07:05:32.605118   11452 round_trippers.go:577] Response Headers:
	I1014 07:05:32.605118   11452 round_trippers.go:580]     Audit-Id: 0da0876e-fa6c-4953-8c0a-2eefd45571b2
	I1014 07:05:32.605118   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 07:05:32.605186   11452 round_trippers.go:580]     Content-Type: application/json
	I1014 07:05:32.605186   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26a3c45b-1a4e-40ca-9f2c-6cbdd9243077
	I1014 07:05:32.605186   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f61aac55-0a5c-497d-832f-4955f256e2fa
	I1014 07:05:32.605186   11452 round_trippers.go:580]     Date: Mon, 14 Oct 2024 14:05:32 GMT
	I1014 07:05:32.605416   11452 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-572000","namespace":"kube-system","uid":"0aaa43a1-8d67-4228-963a-de1b151d9420","resourceVersion":"468","creationTimestamp":"2024-10-14T14:03:28Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.20.99.72:2379","kubernetes.io/config.hash":"b77dd04ee881ae665b99c75cd6250572","kubernetes.io/config.mirror":"b77dd04ee881ae665b99c75cd6250572","kubernetes.io/config.seen":"2024-10-14T14:03:23.002167709Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-572000","uid":"8679aa3a-5b35-4e4b-af7b-0576fab65eb9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T14:03:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6892 chars]
	I1014 07:05:32.606089   11452 round_trippers.go:463] GET https://172.20.99.72:8441/api/v1/nodes/functional-572000
	I1014 07:05:32.606089   11452 round_trippers.go:469] Request Headers:
	I1014 07:05:32.606089   11452 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:05:32.606089   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:05:32.609633   11452 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 07:05:32.609633   11452 round_trippers.go:577] Response Headers:
	I1014 07:05:32.609633   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26a3c45b-1a4e-40ca-9f2c-6cbdd9243077
	I1014 07:05:32.609633   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f61aac55-0a5c-497d-832f-4955f256e2fa
	I1014 07:05:32.609633   11452 round_trippers.go:580]     Date: Mon, 14 Oct 2024 14:05:32 GMT
	I1014 07:05:32.609633   11452 round_trippers.go:580]     Audit-Id: 9c527b38-bbe3-4aef-8b11-141a8f78d9cd
	I1014 07:05:32.609633   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 07:05:32.609633   11452 round_trippers.go:580]     Content-Type: application/json
	I1014 07:05:32.609633   11452 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-572000","uid":"8679aa3a-5b35-4e4b-af7b-0576fab65eb9","resourceVersion":"467","creationTimestamp":"2024-10-14T14:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-572000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"functional-572000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T07_03_31_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T14:03:27Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I1014 07:05:33.090223   11452 round_trippers.go:463] GET https://172.20.99.72:8441/api/v1/namespaces/kube-system/pods/etcd-functional-572000
	I1014 07:05:33.090223   11452 round_trippers.go:469] Request Headers:
	I1014 07:05:33.090223   11452 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:05:33.090223   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:05:33.095382   11452 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 07:05:33.095507   11452 round_trippers.go:577] Response Headers:
	I1014 07:05:33.095507   11452 round_trippers.go:580]     Audit-Id: d3cd2fc1-9fa7-4e61-82fb-d732556498ba
	I1014 07:05:33.095557   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 07:05:33.095557   11452 round_trippers.go:580]     Content-Type: application/json
	I1014 07:05:33.095557   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26a3c45b-1a4e-40ca-9f2c-6cbdd9243077
	I1014 07:05:33.095557   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f61aac55-0a5c-497d-832f-4955f256e2fa
	I1014 07:05:33.095597   11452 round_trippers.go:580]     Date: Mon, 14 Oct 2024 14:05:33 GMT
	I1014 07:05:33.095651   11452 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-572000","namespace":"kube-system","uid":"0aaa43a1-8d67-4228-963a-de1b151d9420","resourceVersion":"468","creationTimestamp":"2024-10-14T14:03:28Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.20.99.72:2379","kubernetes.io/config.hash":"b77dd04ee881ae665b99c75cd6250572","kubernetes.io/config.mirror":"b77dd04ee881ae665b99c75cd6250572","kubernetes.io/config.seen":"2024-10-14T14:03:23.002167709Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-572000","uid":"8679aa3a-5b35-4e4b-af7b-0576fab65eb9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T14:03:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6892 chars]
	I1014 07:05:33.096609   11452 round_trippers.go:463] GET https://172.20.99.72:8441/api/v1/nodes/functional-572000
	I1014 07:05:33.096751   11452 round_trippers.go:469] Request Headers:
	I1014 07:05:33.096751   11452 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:05:33.096751   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:05:33.099753   11452 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 07:05:33.099930   11452 round_trippers.go:577] Response Headers:
	I1014 07:05:33.099930   11452 round_trippers.go:580]     Content-Type: application/json
	I1014 07:05:33.099930   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26a3c45b-1a4e-40ca-9f2c-6cbdd9243077
	I1014 07:05:33.099930   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f61aac55-0a5c-497d-832f-4955f256e2fa
	I1014 07:05:33.099930   11452 round_trippers.go:580]     Date: Mon, 14 Oct 2024 14:05:33 GMT
	I1014 07:05:33.099930   11452 round_trippers.go:580]     Audit-Id: 25c9ac55-2d10-4828-bba5-b02d1d6d553b
	I1014 07:05:33.099930   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 07:05:33.100193   11452 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-572000","uid":"8679aa3a-5b35-4e4b-af7b-0576fab65eb9","resourceVersion":"467","creationTimestamp":"2024-10-14T14:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-572000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"functional-572000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T07_03_31_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T14:03:27Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I1014 07:05:33.590250   11452 round_trippers.go:463] GET https://172.20.99.72:8441/api/v1/namespaces/kube-system/pods/etcd-functional-572000
	I1014 07:05:33.590250   11452 round_trippers.go:469] Request Headers:
	I1014 07:05:33.590250   11452 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:05:33.590250   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:05:33.595320   11452 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 07:05:33.595320   11452 round_trippers.go:577] Response Headers:
	I1014 07:05:33.595320   11452 round_trippers.go:580]     Audit-Id: 02b67926-1ed9-4b0c-ae6c-fb09e0c577f7
	I1014 07:05:33.595440   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 07:05:33.595440   11452 round_trippers.go:580]     Content-Type: application/json
	I1014 07:05:33.595440   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26a3c45b-1a4e-40ca-9f2c-6cbdd9243077
	I1014 07:05:33.595440   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f61aac55-0a5c-497d-832f-4955f256e2fa
	I1014 07:05:33.595440   11452 round_trippers.go:580]     Date: Mon, 14 Oct 2024 14:05:33 GMT
	I1014 07:05:33.595960   11452 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-572000","namespace":"kube-system","uid":"0aaa43a1-8d67-4228-963a-de1b151d9420","resourceVersion":"468","creationTimestamp":"2024-10-14T14:03:28Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.20.99.72:2379","kubernetes.io/config.hash":"b77dd04ee881ae665b99c75cd6250572","kubernetes.io/config.mirror":"b77dd04ee881ae665b99c75cd6250572","kubernetes.io/config.seen":"2024-10-14T14:03:23.002167709Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-572000","uid":"8679aa3a-5b35-4e4b-af7b-0576fab65eb9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T14:03:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6892 chars]
	I1014 07:05:33.596728   11452 round_trippers.go:463] GET https://172.20.99.72:8441/api/v1/nodes/functional-572000
	I1014 07:05:33.596826   11452 round_trippers.go:469] Request Headers:
	I1014 07:05:33.596826   11452 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:05:33.596826   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:05:33.601356   11452 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 07:05:33.601356   11452 round_trippers.go:577] Response Headers:
	I1014 07:05:33.601356   11452 round_trippers.go:580]     Audit-Id: fc0e7ac4-aa17-4e92-9484-e1a99146ee9e
	I1014 07:05:33.601356   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 07:05:33.601356   11452 round_trippers.go:580]     Content-Type: application/json
	I1014 07:05:33.601356   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26a3c45b-1a4e-40ca-9f2c-6cbdd9243077
	I1014 07:05:33.601356   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f61aac55-0a5c-497d-832f-4955f256e2fa
	I1014 07:05:33.601356   11452 round_trippers.go:580]     Date: Mon, 14 Oct 2024 14:05:33 GMT
	I1014 07:05:33.602234   11452 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-572000","uid":"8679aa3a-5b35-4e4b-af7b-0576fab65eb9","resourceVersion":"467","creationTimestamp":"2024-10-14T14:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-572000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"functional-572000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T07_03_31_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T14:03:27Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I1014 07:05:33.602234   11452 pod_ready.go:103] pod "etcd-functional-572000" in "kube-system" namespace has status "Ready":"False"
	I1014 07:05:34.090517   11452 round_trippers.go:463] GET https://172.20.99.72:8441/api/v1/namespaces/kube-system/pods/etcd-functional-572000
	I1014 07:05:34.090517   11452 round_trippers.go:469] Request Headers:
	I1014 07:05:34.090517   11452 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:05:34.090517   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:05:34.096190   11452 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 07:05:34.096190   11452 round_trippers.go:577] Response Headers:
	I1014 07:05:34.096190   11452 round_trippers.go:580]     Content-Type: application/json
	I1014 07:05:34.096190   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26a3c45b-1a4e-40ca-9f2c-6cbdd9243077
	I1014 07:05:34.096190   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f61aac55-0a5c-497d-832f-4955f256e2fa
	I1014 07:05:34.096190   11452 round_trippers.go:580]     Date: Mon, 14 Oct 2024 14:05:34 GMT
	I1014 07:05:34.096190   11452 round_trippers.go:580]     Audit-Id: 09e4cf26-e948-4098-8b6b-d45fe615c138
	I1014 07:05:34.096190   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 07:05:34.096553   11452 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-572000","namespace":"kube-system","uid":"0aaa43a1-8d67-4228-963a-de1b151d9420","resourceVersion":"468","creationTimestamp":"2024-10-14T14:03:28Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.20.99.72:2379","kubernetes.io/config.hash":"b77dd04ee881ae665b99c75cd6250572","kubernetes.io/config.mirror":"b77dd04ee881ae665b99c75cd6250572","kubernetes.io/config.seen":"2024-10-14T14:03:23.002167709Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-572000","uid":"8679aa3a-5b35-4e4b-af7b-0576fab65eb9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T14:03:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6892 chars]
	I1014 07:05:34.097209   11452 round_trippers.go:463] GET https://172.20.99.72:8441/api/v1/nodes/functional-572000
	I1014 07:05:34.097209   11452 round_trippers.go:469] Request Headers:
	I1014 07:05:34.097209   11452 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:05:34.097209   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:05:34.100780   11452 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 07:05:34.100803   11452 round_trippers.go:577] Response Headers:
	I1014 07:05:34.100803   11452 round_trippers.go:580]     Audit-Id: e45856d2-33eb-4670-8d3e-b996cfadf5ce
	I1014 07:05:34.100803   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 07:05:34.100803   11452 round_trippers.go:580]     Content-Type: application/json
	I1014 07:05:34.100803   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26a3c45b-1a4e-40ca-9f2c-6cbdd9243077
	I1014 07:05:34.100803   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f61aac55-0a5c-497d-832f-4955f256e2fa
	I1014 07:05:34.100867   11452 round_trippers.go:580]     Date: Mon, 14 Oct 2024 14:05:34 GMT
	I1014 07:05:34.100899   11452 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-572000","uid":"8679aa3a-5b35-4e4b-af7b-0576fab65eb9","resourceVersion":"467","creationTimestamp":"2024-10-14T14:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-572000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"functional-572000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T07_03_31_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T14:03:27Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I1014 07:05:34.590751   11452 round_trippers.go:463] GET https://172.20.99.72:8441/api/v1/namespaces/kube-system/pods/etcd-functional-572000
	I1014 07:05:34.590751   11452 round_trippers.go:469] Request Headers:
	I1014 07:05:34.590751   11452 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:05:34.590751   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:05:34.596110   11452 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 07:05:34.596110   11452 round_trippers.go:577] Response Headers:
	I1014 07:05:34.596188   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26a3c45b-1a4e-40ca-9f2c-6cbdd9243077
	I1014 07:05:34.596188   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f61aac55-0a5c-497d-832f-4955f256e2fa
	I1014 07:05:34.596188   11452 round_trippers.go:580]     Date: Mon, 14 Oct 2024 14:05:34 GMT
	I1014 07:05:34.596228   11452 round_trippers.go:580]     Audit-Id: acf6a9e2-ee39-4be4-a530-654a05885f5b
	I1014 07:05:34.596228   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 07:05:34.596228   11452 round_trippers.go:580]     Content-Type: application/json
	I1014 07:05:34.596896   11452 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-572000","namespace":"kube-system","uid":"0aaa43a1-8d67-4228-963a-de1b151d9420","resourceVersion":"468","creationTimestamp":"2024-10-14T14:03:28Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.20.99.72:2379","kubernetes.io/config.hash":"b77dd04ee881ae665b99c75cd6250572","kubernetes.io/config.mirror":"b77dd04ee881ae665b99c75cd6250572","kubernetes.io/config.seen":"2024-10-14T14:03:23.002167709Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-572000","uid":"8679aa3a-5b35-4e4b-af7b-0576fab65eb9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T14:03:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6892 chars]
	I1014 07:05:34.598102   11452 round_trippers.go:463] GET https://172.20.99.72:8441/api/v1/nodes/functional-572000
	I1014 07:05:34.598158   11452 round_trippers.go:469] Request Headers:
	I1014 07:05:34.598158   11452 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:05:34.598158   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:05:34.603965   11452 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 07:05:34.603965   11452 round_trippers.go:577] Response Headers:
	I1014 07:05:34.603965   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26a3c45b-1a4e-40ca-9f2c-6cbdd9243077
	I1014 07:05:34.603965   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f61aac55-0a5c-497d-832f-4955f256e2fa
	I1014 07:05:34.603965   11452 round_trippers.go:580]     Date: Mon, 14 Oct 2024 14:05:34 GMT
	I1014 07:05:34.603965   11452 round_trippers.go:580]     Audit-Id: 5c758a5e-aea6-47c3-8be7-3c95e61b7a8e
	I1014 07:05:34.603965   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 07:05:34.603965   11452 round_trippers.go:580]     Content-Type: application/json
	I1014 07:05:34.604743   11452 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-572000","uid":"8679aa3a-5b35-4e4b-af7b-0576fab65eb9","resourceVersion":"467","creationTimestamp":"2024-10-14T14:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-572000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"functional-572000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T07_03_31_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T14:03:27Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I1014 07:05:35.090777   11452 round_trippers.go:463] GET https://172.20.99.72:8441/api/v1/namespaces/kube-system/pods/etcd-functional-572000
	I1014 07:05:35.090885   11452 round_trippers.go:469] Request Headers:
	I1014 07:05:35.090885   11452 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:05:35.090885   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:05:35.094811   11452 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 07:05:35.094811   11452 round_trippers.go:577] Response Headers:
	I1014 07:05:35.094811   11452 round_trippers.go:580]     Date: Mon, 14 Oct 2024 14:05:35 GMT
	I1014 07:05:35.094811   11452 round_trippers.go:580]     Audit-Id: 7b32ceed-a880-4850-b3de-236d6c6b374d
	I1014 07:05:35.094965   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 07:05:35.094965   11452 round_trippers.go:580]     Content-Type: application/json
	I1014 07:05:35.094965   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26a3c45b-1a4e-40ca-9f2c-6cbdd9243077
	I1014 07:05:35.094965   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f61aac55-0a5c-497d-832f-4955f256e2fa
	I1014 07:05:35.095122   11452 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-572000","namespace":"kube-system","uid":"0aaa43a1-8d67-4228-963a-de1b151d9420","resourceVersion":"468","creationTimestamp":"2024-10-14T14:03:28Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.20.99.72:2379","kubernetes.io/config.hash":"b77dd04ee881ae665b99c75cd6250572","kubernetes.io/config.mirror":"b77dd04ee881ae665b99c75cd6250572","kubernetes.io/config.seen":"2024-10-14T14:03:23.002167709Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-572000","uid":"8679aa3a-5b35-4e4b-af7b-0576fab65eb9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T14:03:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6892 chars]
	I1014 07:05:35.095943   11452 round_trippers.go:463] GET https://172.20.99.72:8441/api/v1/nodes/functional-572000
	I1014 07:05:35.095943   11452 round_trippers.go:469] Request Headers:
	I1014 07:05:35.095943   11452 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:05:35.095943   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:05:35.098503   11452 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 07:05:35.098503   11452 round_trippers.go:577] Response Headers:
	I1014 07:05:35.098503   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26a3c45b-1a4e-40ca-9f2c-6cbdd9243077
	I1014 07:05:35.098503   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f61aac55-0a5c-497d-832f-4955f256e2fa
	I1014 07:05:35.098503   11452 round_trippers.go:580]     Date: Mon, 14 Oct 2024 14:05:35 GMT
	I1014 07:05:35.098503   11452 round_trippers.go:580]     Audit-Id: a27b9c89-3d1c-47cb-94a0-7e900e6f3ef7
	I1014 07:05:35.098503   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 07:05:35.098503   11452 round_trippers.go:580]     Content-Type: application/json
	I1014 07:05:35.099188   11452 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-572000","uid":"8679aa3a-5b35-4e4b-af7b-0576fab65eb9","resourceVersion":"467","creationTimestamp":"2024-10-14T14:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-572000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"functional-572000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T07_03_31_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T14:03:27Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I1014 07:05:35.591474   11452 round_trippers.go:463] GET https://172.20.99.72:8441/api/v1/namespaces/kube-system/pods/etcd-functional-572000
	I1014 07:05:35.591474   11452 round_trippers.go:469] Request Headers:
	I1014 07:05:35.591474   11452 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:05:35.591474   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:05:35.594849   11452 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 07:05:35.594940   11452 round_trippers.go:577] Response Headers:
	I1014 07:05:35.594940   11452 round_trippers.go:580]     Content-Type: application/json
	I1014 07:05:35.594940   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26a3c45b-1a4e-40ca-9f2c-6cbdd9243077
	I1014 07:05:35.594940   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f61aac55-0a5c-497d-832f-4955f256e2fa
	I1014 07:05:35.594940   11452 round_trippers.go:580]     Date: Mon, 14 Oct 2024 14:05:35 GMT
	I1014 07:05:35.594940   11452 round_trippers.go:580]     Audit-Id: f01932c5-4ba5-4f90-aa56-124c058ca856
	I1014 07:05:35.594940   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 07:05:35.595296   11452 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-572000","namespace":"kube-system","uid":"0aaa43a1-8d67-4228-963a-de1b151d9420","resourceVersion":"468","creationTimestamp":"2024-10-14T14:03:28Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.20.99.72:2379","kubernetes.io/config.hash":"b77dd04ee881ae665b99c75cd6250572","kubernetes.io/config.mirror":"b77dd04ee881ae665b99c75cd6250572","kubernetes.io/config.seen":"2024-10-14T14:03:23.002167709Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-572000","uid":"8679aa3a-5b35-4e4b-af7b-0576fab65eb9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T14:03:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6892 chars]
	I1014 07:05:35.595800   11452 round_trippers.go:463] GET https://172.20.99.72:8441/api/v1/nodes/functional-572000
	I1014 07:05:35.595800   11452 round_trippers.go:469] Request Headers:
	I1014 07:05:35.595800   11452 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:05:35.595800   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:05:35.602853   11452 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1014 07:05:35.602884   11452 round_trippers.go:577] Response Headers:
	I1014 07:05:35.602914   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 07:05:35.602914   11452 round_trippers.go:580]     Content-Type: application/json
	I1014 07:05:35.602914   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26a3c45b-1a4e-40ca-9f2c-6cbdd9243077
	I1014 07:05:35.602914   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f61aac55-0a5c-497d-832f-4955f256e2fa
	I1014 07:05:35.602914   11452 round_trippers.go:580]     Date: Mon, 14 Oct 2024 14:05:35 GMT
	I1014 07:05:35.602914   11452 round_trippers.go:580]     Audit-Id: 36955db8-237a-4626-8b07-7e9b30f04f19
	I1014 07:05:35.602914   11452 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-572000","uid":"8679aa3a-5b35-4e4b-af7b-0576fab65eb9","resourceVersion":"467","creationTimestamp":"2024-10-14T14:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-572000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"functional-572000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T07_03_31_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T14:03:27Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I1014 07:05:35.603641   11452 pod_ready.go:103] pod "etcd-functional-572000" in "kube-system" namespace has status "Ready":"False"
	I1014 07:05:36.091060   11452 round_trippers.go:463] GET https://172.20.99.72:8441/api/v1/namespaces/kube-system/pods/etcd-functional-572000
	I1014 07:05:36.091150   11452 round_trippers.go:469] Request Headers:
	I1014 07:05:36.091208   11452 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:05:36.091208   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:05:36.094729   11452 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 07:05:36.094836   11452 round_trippers.go:577] Response Headers:
	I1014 07:05:36.094836   11452 round_trippers.go:580]     Content-Type: application/json
	I1014 07:05:36.094836   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26a3c45b-1a4e-40ca-9f2c-6cbdd9243077
	I1014 07:05:36.094836   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f61aac55-0a5c-497d-832f-4955f256e2fa
	I1014 07:05:36.094898   11452 round_trippers.go:580]     Date: Mon, 14 Oct 2024 14:05:36 GMT
	I1014 07:05:36.094898   11452 round_trippers.go:580]     Audit-Id: c5b39476-cca9-44d5-9913-2334efba313b
	I1014 07:05:36.094898   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 07:05:36.095013   11452 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-572000","namespace":"kube-system","uid":"0aaa43a1-8d67-4228-963a-de1b151d9420","resourceVersion":"468","creationTimestamp":"2024-10-14T14:03:28Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.20.99.72:2379","kubernetes.io/config.hash":"b77dd04ee881ae665b99c75cd6250572","kubernetes.io/config.mirror":"b77dd04ee881ae665b99c75cd6250572","kubernetes.io/config.seen":"2024-10-14T14:03:23.002167709Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-572000","uid":"8679aa3a-5b35-4e4b-af7b-0576fab65eb9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T14:03:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6892 chars]
	I1014 07:05:36.096231   11452 round_trippers.go:463] GET https://172.20.99.72:8441/api/v1/nodes/functional-572000
	I1014 07:05:36.096285   11452 round_trippers.go:469] Request Headers:
	I1014 07:05:36.096285   11452 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:05:36.096285   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:05:36.100439   11452 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 07:05:36.100439   11452 round_trippers.go:577] Response Headers:
	I1014 07:05:36.100507   11452 round_trippers.go:580]     Date: Mon, 14 Oct 2024 14:05:36 GMT
	I1014 07:05:36.100507   11452 round_trippers.go:580]     Audit-Id: bda25144-92ac-43f6-8247-059ddb130688
	I1014 07:05:36.100507   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 07:05:36.100507   11452 round_trippers.go:580]     Content-Type: application/json
	I1014 07:05:36.100507   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26a3c45b-1a4e-40ca-9f2c-6cbdd9243077
	I1014 07:05:36.100507   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f61aac55-0a5c-497d-832f-4955f256e2fa
	I1014 07:05:36.100827   11452 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-572000","uid":"8679aa3a-5b35-4e4b-af7b-0576fab65eb9","resourceVersion":"467","creationTimestamp":"2024-10-14T14:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-572000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"functional-572000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T07_03_31_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T14:03:27Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I1014 07:05:36.590741   11452 round_trippers.go:463] GET https://172.20.99.72:8441/api/v1/namespaces/kube-system/pods/etcd-functional-572000
	I1014 07:05:36.590741   11452 round_trippers.go:469] Request Headers:
	I1014 07:05:36.590741   11452 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:05:36.590741   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:05:36.595884   11452 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 07:05:36.596013   11452 round_trippers.go:577] Response Headers:
	I1014 07:05:36.596013   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26a3c45b-1a4e-40ca-9f2c-6cbdd9243077
	I1014 07:05:36.596013   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f61aac55-0a5c-497d-832f-4955f256e2fa
	I1014 07:05:36.596013   11452 round_trippers.go:580]     Date: Mon, 14 Oct 2024 14:05:36 GMT
	I1014 07:05:36.596013   11452 round_trippers.go:580]     Audit-Id: 4d346f3e-85b9-4a8d-b58b-e2c59e11b4bb
	I1014 07:05:36.596013   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 07:05:36.596013   11452 round_trippers.go:580]     Content-Type: application/json
	I1014 07:05:36.596558   11452 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-572000","namespace":"kube-system","uid":"0aaa43a1-8d67-4228-963a-de1b151d9420","resourceVersion":"468","creationTimestamp":"2024-10-14T14:03:28Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.20.99.72:2379","kubernetes.io/config.hash":"b77dd04ee881ae665b99c75cd6250572","kubernetes.io/config.mirror":"b77dd04ee881ae665b99c75cd6250572","kubernetes.io/config.seen":"2024-10-14T14:03:23.002167709Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-572000","uid":"8679aa3a-5b35-4e4b-af7b-0576fab65eb9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T14:03:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6892 chars]
	I1014 07:05:36.597515   11452 round_trippers.go:463] GET https://172.20.99.72:8441/api/v1/nodes/functional-572000
	I1014 07:05:36.597640   11452 round_trippers.go:469] Request Headers:
	I1014 07:05:36.597640   11452 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:05:36.597640   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:05:36.603232   11452 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 07:05:36.603320   11452 round_trippers.go:577] Response Headers:
	I1014 07:05:36.603320   11452 round_trippers.go:580]     Audit-Id: 6839589a-0c3a-4e6e-ab46-828e497a824b
	I1014 07:05:36.603320   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 07:05:36.603320   11452 round_trippers.go:580]     Content-Type: application/json
	I1014 07:05:36.603320   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26a3c45b-1a4e-40ca-9f2c-6cbdd9243077
	I1014 07:05:36.603320   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f61aac55-0a5c-497d-832f-4955f256e2fa
	I1014 07:05:36.603320   11452 round_trippers.go:580]     Date: Mon, 14 Oct 2024 14:05:36 GMT
	I1014 07:05:36.603501   11452 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-572000","uid":"8679aa3a-5b35-4e4b-af7b-0576fab65eb9","resourceVersion":"467","creationTimestamp":"2024-10-14T14:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-572000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"functional-572000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T07_03_31_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T14:03:27Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I1014 07:05:37.091473   11452 round_trippers.go:463] GET https://172.20.99.72:8441/api/v1/namespaces/kube-system/pods/etcd-functional-572000
	I1014 07:05:37.091473   11452 round_trippers.go:469] Request Headers:
	I1014 07:05:37.091473   11452 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:05:37.091473   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:05:37.096476   11452 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 07:05:37.096476   11452 round_trippers.go:577] Response Headers:
	I1014 07:05:37.096476   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 07:05:37.096476   11452 round_trippers.go:580]     Content-Type: application/json
	I1014 07:05:37.096476   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26a3c45b-1a4e-40ca-9f2c-6cbdd9243077
	I1014 07:05:37.096476   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f61aac55-0a5c-497d-832f-4955f256e2fa
	I1014 07:05:37.096476   11452 round_trippers.go:580]     Date: Mon, 14 Oct 2024 14:05:37 GMT
	I1014 07:05:37.096476   11452 round_trippers.go:580]     Audit-Id: ff546b13-6d5c-4e3d-a1a3-a46d84825d30
	I1014 07:05:37.096476   11452 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-572000","namespace":"kube-system","uid":"0aaa43a1-8d67-4228-963a-de1b151d9420","resourceVersion":"468","creationTimestamp":"2024-10-14T14:03:28Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.20.99.72:2379","kubernetes.io/config.hash":"b77dd04ee881ae665b99c75cd6250572","kubernetes.io/config.mirror":"b77dd04ee881ae665b99c75cd6250572","kubernetes.io/config.seen":"2024-10-14T14:03:23.002167709Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-572000","uid":"8679aa3a-5b35-4e4b-af7b-0576fab65eb9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T14:03:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6892 chars]
	I1014 07:05:37.097801   11452 round_trippers.go:463] GET https://172.20.99.72:8441/api/v1/nodes/functional-572000
	I1014 07:05:37.097947   11452 round_trippers.go:469] Request Headers:
	I1014 07:05:37.097947   11452 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:05:37.097947   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:05:37.100965   11452 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 07:05:37.100965   11452 round_trippers.go:577] Response Headers:
	I1014 07:05:37.100965   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 07:05:37.100965   11452 round_trippers.go:580]     Content-Type: application/json
	I1014 07:05:37.100965   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26a3c45b-1a4e-40ca-9f2c-6cbdd9243077
	I1014 07:05:37.100965   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f61aac55-0a5c-497d-832f-4955f256e2fa
	I1014 07:05:37.100965   11452 round_trippers.go:580]     Date: Mon, 14 Oct 2024 14:05:37 GMT
	I1014 07:05:37.100965   11452 round_trippers.go:580]     Audit-Id: 2564a7cd-bdc6-4691-888c-bd072da0b78b
	I1014 07:05:37.101514   11452 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-572000","uid":"8679aa3a-5b35-4e4b-af7b-0576fab65eb9","resourceVersion":"467","creationTimestamp":"2024-10-14T14:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-572000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"functional-572000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T07_03_31_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-10-14T14:03:27Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I1014 07:05:37.591591   11452 round_trippers.go:463] GET https://172.20.99.72:8441/api/v1/namespaces/kube-system/pods/etcd-functional-572000
	I1014 07:05:37.591591   11452 round_trippers.go:469] Request Headers:
	I1014 07:05:37.591591   11452 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:05:37.591591   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:05:37.606672   11452 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I1014 07:05:37.606672   11452 round_trippers.go:577] Response Headers:
	I1014 07:05:37.606748   11452 round_trippers.go:580]     Audit-Id: f65de716-e463-4e2e-bea3-50109c586bce
	I1014 07:05:37.606748   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 07:05:37.606748   11452 round_trippers.go:580]     Content-Type: application/json
	I1014 07:05:37.606748   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26a3c45b-1a4e-40ca-9f2c-6cbdd9243077
	I1014 07:05:37.606748   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f61aac55-0a5c-497d-832f-4955f256e2fa
	I1014 07:05:37.606748   11452 round_trippers.go:580]     Date: Mon, 14 Oct 2024 14:05:37 GMT
	I1014 07:05:37.607015   11452 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-572000","namespace":"kube-system","uid":"0aaa43a1-8d67-4228-963a-de1b151d9420","resourceVersion":"468","creationTimestamp":"2024-10-14T14:03:28Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.20.99.72:2379","kubernetes.io/config.hash":"b77dd04ee881ae665b99c75cd6250572","kubernetes.io/config.mirror":"b77dd04ee881ae665b99c75cd6250572","kubernetes.io/config.seen":"2024-10-14T14:03:23.002167709Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-572000","uid":"8679aa3a-5b35-4e4b-af7b-0576fab65eb9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T14:03:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6892 chars]
	I1014 07:05:37.608640   11452 round_trippers.go:463] GET https://172.20.99.72:8441/api/v1/nodes/functional-572000
	I1014 07:05:37.608691   11452 round_trippers.go:469] Request Headers:
	I1014 07:05:37.608691   11452 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:05:37.608691   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:05:37.615683   11452 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1014 07:05:37.615683   11452 round_trippers.go:577] Response Headers:
	I1014 07:05:37.615683   11452 round_trippers.go:580]     Date: Mon, 14 Oct 2024 14:05:37 GMT
	I1014 07:05:37.615683   11452 round_trippers.go:580]     Audit-Id: f188d351-3aa5-4af8-a449-dc995b28d89a
	I1014 07:05:37.615683   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 07:05:37.615683   11452 round_trippers.go:580]     Content-Type: application/json
	I1014 07:05:37.615683   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26a3c45b-1a4e-40ca-9f2c-6cbdd9243077
	I1014 07:05:37.615683   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f61aac55-0a5c-497d-832f-4955f256e2fa
	I1014 07:05:37.615683   11452 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-572000","uid":"8679aa3a-5b35-4e4b-af7b-0576fab65eb9","resourceVersion":"467","creationTimestamp":"2024-10-14T14:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-572000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"functional-572000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T07_03_31_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-10-14T14:03:27Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I1014 07:05:37.615683   11452 pod_ready.go:103] pod "etcd-functional-572000" in "kube-system" namespace has status "Ready":"False"
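The block above is one iteration of minikube's pod_ready poll: GET the pod, read its status conditions, GET the node to confirm it still exists, and log Ready=False until the condition flips. A minimal client-go sketch of the same condition check follows; the kubeconfig location, namespace, and pod name are taken from this log, but the code is an illustrative stand-in, not minikube's own helper:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podIsReady reports whether the PodReady condition is True -- the same
    // field behind the "Ready":"False" / "Ready":"True" lines above.
    func podIsReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-functional-572000", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("pod %s ready: %v\n", pod.Name, podIsReady(pod))
    }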
	I1014 07:05:38.090380   11452 round_trippers.go:463] GET https://172.20.99.72:8441/api/v1/namespaces/kube-system/pods/etcd-functional-572000
	I1014 07:05:38.090380   11452 round_trippers.go:469] Request Headers:
	I1014 07:05:38.090380   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:05:38.090380   11452 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:05:38.095723   11452 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 07:05:38.095823   11452 round_trippers.go:577] Response Headers:
	I1014 07:05:38.095823   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 07:05:38.095823   11452 round_trippers.go:580]     Content-Type: application/json
	I1014 07:05:38.095823   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26a3c45b-1a4e-40ca-9f2c-6cbdd9243077
	I1014 07:05:38.095823   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f61aac55-0a5c-497d-832f-4955f256e2fa
	I1014 07:05:38.095923   11452 round_trippers.go:580]     Date: Mon, 14 Oct 2024 14:05:38 GMT
	I1014 07:05:38.095923   11452 round_trippers.go:580]     Audit-Id: 15b835e8-729e-41ed-9d6a-0abae380cf35
	I1014 07:05:38.096197   11452 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-572000","namespace":"kube-system","uid":"0aaa43a1-8d67-4228-963a-de1b151d9420","resourceVersion":"468","creationTimestamp":"2024-10-14T14:03:28Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.20.99.72:2379","kubernetes.io/config.hash":"b77dd04ee881ae665b99c75cd6250572","kubernetes.io/config.mirror":"b77dd04ee881ae665b99c75cd6250572","kubernetes.io/config.seen":"2024-10-14T14:03:23.002167709Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-572000","uid":"8679aa3a-5b35-4e4b-af7b-0576fab65eb9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T14:03:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6892 chars]
	I1014 07:05:38.096951   11452 round_trippers.go:463] GET https://172.20.99.72:8441/api/v1/nodes/functional-572000
	I1014 07:05:38.096951   11452 round_trippers.go:469] Request Headers:
	I1014 07:05:38.096951   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:05:38.096951   11452 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:05:38.100300   11452 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 07:05:38.100370   11452 round_trippers.go:577] Response Headers:
	I1014 07:05:38.100370   11452 round_trippers.go:580]     Audit-Id: b9b98fc4-5b40-468c-9421-77ea24c9c2f5
	I1014 07:05:38.100370   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 07:05:38.100370   11452 round_trippers.go:580]     Content-Type: application/json
	I1014 07:05:38.100370   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26a3c45b-1a4e-40ca-9f2c-6cbdd9243077
	I1014 07:05:38.100433   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f61aac55-0a5c-497d-832f-4955f256e2fa
	I1014 07:05:38.100433   11452 round_trippers.go:580]     Date: Mon, 14 Oct 2024 14:05:38 GMT
	I1014 07:05:38.100551   11452 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-572000","uid":"8679aa3a-5b35-4e4b-af7b-0576fab65eb9","resourceVersion":"467","creationTimestamp":"2024-10-14T14:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-572000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"functional-572000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T07_03_31_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-10-14T14:03:27Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I1014 07:05:38.591194   11452 round_trippers.go:463] GET https://172.20.99.72:8441/api/v1/namespaces/kube-system/pods/etcd-functional-572000
	I1014 07:05:38.591273   11452 round_trippers.go:469] Request Headers:
	I1014 07:05:38.591377   11452 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:05:38.591377   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:05:38.596276   11452 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 07:05:38.596276   11452 round_trippers.go:577] Response Headers:
	I1014 07:05:38.596276   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26a3c45b-1a4e-40ca-9f2c-6cbdd9243077
	I1014 07:05:38.596276   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f61aac55-0a5c-497d-832f-4955f256e2fa
	I1014 07:05:38.596394   11452 round_trippers.go:580]     Date: Mon, 14 Oct 2024 14:05:38 GMT
	I1014 07:05:38.596394   11452 round_trippers.go:580]     Audit-Id: 3db38864-4bbc-4947-9ef9-825698fd132b
	I1014 07:05:38.596394   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 07:05:38.596394   11452 round_trippers.go:580]     Content-Type: application/json
	I1014 07:05:38.596819   11452 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-572000","namespace":"kube-system","uid":"0aaa43a1-8d67-4228-963a-de1b151d9420","resourceVersion":"468","creationTimestamp":"2024-10-14T14:03:28Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.20.99.72:2379","kubernetes.io/config.hash":"b77dd04ee881ae665b99c75cd6250572","kubernetes.io/config.mirror":"b77dd04ee881ae665b99c75cd6250572","kubernetes.io/config.seen":"2024-10-14T14:03:23.002167709Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-572000","uid":"8679aa3a-5b35-4e4b-af7b-0576fab65eb9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T14:03:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6892 chars]
	I1014 07:05:38.597825   11452 round_trippers.go:463] GET https://172.20.99.72:8441/api/v1/nodes/functional-572000
	I1014 07:05:38.597825   11452 round_trippers.go:469] Request Headers:
	I1014 07:05:38.597929   11452 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:05:38.597929   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:05:38.602780   11452 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 07:05:38.602780   11452 round_trippers.go:577] Response Headers:
	I1014 07:05:38.602780   11452 round_trippers.go:580]     Content-Type: application/json
	I1014 07:05:38.602780   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26a3c45b-1a4e-40ca-9f2c-6cbdd9243077
	I1014 07:05:38.602780   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f61aac55-0a5c-497d-832f-4955f256e2fa
	I1014 07:05:38.602780   11452 round_trippers.go:580]     Date: Mon, 14 Oct 2024 14:05:38 GMT
	I1014 07:05:38.602780   11452 round_trippers.go:580]     Audit-Id: 702295a9-97d7-46c4-b1d3-a062972a9bfb
	I1014 07:05:38.602780   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 07:05:38.603532   11452 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-572000","uid":"8679aa3a-5b35-4e4b-af7b-0576fab65eb9","resourceVersion":"467","creationTimestamp":"2024-10-14T14:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-572000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"functional-572000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T07_03_31_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-10-14T14:03:27Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I1014 07:05:39.090913   11452 round_trippers.go:463] GET https://172.20.99.72:8441/api/v1/namespaces/kube-system/pods/etcd-functional-572000
	I1014 07:05:39.090913   11452 round_trippers.go:469] Request Headers:
	I1014 07:05:39.090913   11452 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:05:39.090913   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:05:39.095569   11452 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 07:05:39.095607   11452 round_trippers.go:577] Response Headers:
	I1014 07:05:39.095607   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 07:05:39.095607   11452 round_trippers.go:580]     Content-Type: application/json
	I1014 07:05:39.095607   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26a3c45b-1a4e-40ca-9f2c-6cbdd9243077
	I1014 07:05:39.095607   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f61aac55-0a5c-497d-832f-4955f256e2fa
	I1014 07:05:39.095607   11452 round_trippers.go:580]     Date: Mon, 14 Oct 2024 14:05:39 GMT
	I1014 07:05:39.095607   11452 round_trippers.go:580]     Audit-Id: 164f453c-107d-4408-be07-308d0d023e6f
	I1014 07:05:39.095888   11452 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-572000","namespace":"kube-system","uid":"0aaa43a1-8d67-4228-963a-de1b151d9420","resourceVersion":"468","creationTimestamp":"2024-10-14T14:03:28Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.20.99.72:2379","kubernetes.io/config.hash":"b77dd04ee881ae665b99c75cd6250572","kubernetes.io/config.mirror":"b77dd04ee881ae665b99c75cd6250572","kubernetes.io/config.seen":"2024-10-14T14:03:23.002167709Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-572000","uid":"8679aa3a-5b35-4e4b-af7b-0576fab65eb9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T14:03:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6892 chars]
	I1014 07:05:39.096676   11452 round_trippers.go:463] GET https://172.20.99.72:8441/api/v1/nodes/functional-572000
	I1014 07:05:39.096676   11452 round_trippers.go:469] Request Headers:
	I1014 07:05:39.096786   11452 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:05:39.096786   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:05:39.099911   11452 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 07:05:39.100033   11452 round_trippers.go:577] Response Headers:
	I1014 07:05:39.100033   11452 round_trippers.go:580]     Date: Mon, 14 Oct 2024 14:05:39 GMT
	I1014 07:05:39.100155   11452 round_trippers.go:580]     Audit-Id: 2520bc1e-2d41-46c1-8bc8-c819d0f27dac
	I1014 07:05:39.100155   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 07:05:39.100155   11452 round_trippers.go:580]     Content-Type: application/json
	I1014 07:05:39.100242   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26a3c45b-1a4e-40ca-9f2c-6cbdd9243077
	I1014 07:05:39.100242   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f61aac55-0a5c-497d-832f-4955f256e2fa
	I1014 07:05:39.100242   11452 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-572000","uid":"8679aa3a-5b35-4e4b-af7b-0576fab65eb9","resourceVersion":"467","creationTimestamp":"2024-10-14T14:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-572000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"functional-572000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T07_03_31_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-10-14T14:03:27Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I1014 07:05:39.590708   11452 round_trippers.go:463] GET https://172.20.99.72:8441/api/v1/namespaces/kube-system/pods/etcd-functional-572000
	I1014 07:05:39.590708   11452 round_trippers.go:469] Request Headers:
	I1014 07:05:39.590708   11452 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:05:39.590708   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:05:39.595924   11452 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 07:05:39.596003   11452 round_trippers.go:577] Response Headers:
	I1014 07:05:39.596003   11452 round_trippers.go:580]     Audit-Id: 11230194-e8c3-4636-bfc0-a098dc21ad24
	I1014 07:05:39.596003   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 07:05:39.596003   11452 round_trippers.go:580]     Content-Type: application/json
	I1014 07:05:39.596003   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26a3c45b-1a4e-40ca-9f2c-6cbdd9243077
	I1014 07:05:39.596003   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f61aac55-0a5c-497d-832f-4955f256e2fa
	I1014 07:05:39.596003   11452 round_trippers.go:580]     Date: Mon, 14 Oct 2024 14:05:39 GMT
	I1014 07:05:39.597579   11452 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-572000","namespace":"kube-system","uid":"0aaa43a1-8d67-4228-963a-de1b151d9420","resourceVersion":"551","creationTimestamp":"2024-10-14T14:03:28Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.20.99.72:2379","kubernetes.io/config.hash":"b77dd04ee881ae665b99c75cd6250572","kubernetes.io/config.mirror":"b77dd04ee881ae665b99c75cd6250572","kubernetes.io/config.seen":"2024-10-14T14:03:23.002167709Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-572000","uid":"8679aa3a-5b35-4e4b-af7b-0576fab65eb9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T14:03:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6668 chars]
	I1014 07:05:39.597896   11452 round_trippers.go:463] GET https://172.20.99.72:8441/api/v1/nodes/functional-572000
	I1014 07:05:39.597896   11452 round_trippers.go:469] Request Headers:
	I1014 07:05:39.597896   11452 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:05:39.598426   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:05:39.601576   11452 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 07:05:39.601646   11452 round_trippers.go:577] Response Headers:
	I1014 07:05:39.601646   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 07:05:39.601646   11452 round_trippers.go:580]     Content-Type: application/json
	I1014 07:05:39.601646   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26a3c45b-1a4e-40ca-9f2c-6cbdd9243077
	I1014 07:05:39.601646   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f61aac55-0a5c-497d-832f-4955f256e2fa
	I1014 07:05:39.601646   11452 round_trippers.go:580]     Date: Mon, 14 Oct 2024 14:05:39 GMT
	I1014 07:05:39.601646   11452 round_trippers.go:580]     Audit-Id: 8b2cc201-c5e6-44eb-b1a8-3411233be728
	I1014 07:05:39.601990   11452 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-572000","uid":"8679aa3a-5b35-4e4b-af7b-0576fab65eb9","resourceVersion":"467","creationTimestamp":"2024-10-14T14:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-572000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"functional-572000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T07_03_31_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-10-14T14:03:27Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I1014 07:05:39.602044   11452 pod_ready.go:93] pod "etcd-functional-572000" in "kube-system" namespace has status "Ready":"True"
	I1014 07:05:39.602044   11452 pod_ready.go:82] duration metric: took 10.5120166s for pod "etcd-functional-572000" in "kube-system" namespace to be "Ready" ...
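Each "waiting up to 4m0s" entry is a bounded poll: the condition check repeats on a fixed interval until it succeeds or the budget is exhausted, and the duration metric line records how long that took. A sketch of the pattern with apimachinery's wait helpers; the 500ms interval is an assumption inferred from the ~half-second spacing of the request timestamps above, and isReady is a hypothetical placeholder for the pod GET plus condition check:

    package main

    import (
        "context"
        "fmt"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    // isReady stands in for the pod GET + PodReady condition check.
    func isReady(ctx context.Context) (bool, error) {
        return true, nil // hypothetical placeholder; a real check would query the API server
    }

    func main() {
        // Poll every 500ms until isReady returns true or 4 minutes elapse,
        // matching the "waiting up to 4m0s" budget logged per control-plane pod.
        err := wait.PollUntilContextTimeout(context.Background(),
            500*time.Millisecond, 4*time.Minute, true, isReady)
        if err != nil {
            fmt.Println("gave up waiting:", err)
        }
    }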
	I1014 07:05:39.602586   11452 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-functional-572000" in "kube-system" namespace to be "Ready" ...
	I1014 07:05:39.602586   11452 round_trippers.go:463] GET https://172.20.99.72:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-572000
	I1014 07:05:39.602757   11452 round_trippers.go:469] Request Headers:
	I1014 07:05:39.602757   11452 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:05:39.602822   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:05:39.605937   11452 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 07:05:39.606010   11452 round_trippers.go:577] Response Headers:
	I1014 07:05:39.606010   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f61aac55-0a5c-497d-832f-4955f256e2fa
	I1014 07:05:39.606010   11452 round_trippers.go:580]     Date: Mon, 14 Oct 2024 14:05:39 GMT
	I1014 07:05:39.606078   11452 round_trippers.go:580]     Audit-Id: 3119674e-f236-4f59-b298-6f269106f8bf
	I1014 07:05:39.606078   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 07:05:39.606078   11452 round_trippers.go:580]     Content-Type: application/json
	I1014 07:05:39.606078   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26a3c45b-1a4e-40ca-9f2c-6cbdd9243077
	I1014 07:05:39.606390   11452 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-572000","namespace":"kube-system","uid":"cfc32caa-722d-454e-a73d-2251e1353c91","resourceVersion":"469","creationTimestamp":"2024-10-14T14:03:30Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.20.99.72:8441","kubernetes.io/config.hash":"3cbcace444fc499282545cf4eeaba920","kubernetes.io/config.mirror":"3cbcace444fc499282545cf4eeaba920","kubernetes.io/config.seen":"2024-10-14T14:03:30.409861925Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-572000","uid":"8679aa3a-5b35-4e4b-af7b-0576fab65eb9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T14:03:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8138 chars]
	I1014 07:05:39.606965   11452 round_trippers.go:463] GET https://172.20.99.72:8441/api/v1/nodes/functional-572000
	I1014 07:05:39.606965   11452 round_trippers.go:469] Request Headers:
	I1014 07:05:39.606965   11452 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:05:39.606965   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:05:39.609748   11452 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 07:05:39.609748   11452 round_trippers.go:577] Response Headers:
	I1014 07:05:39.609748   11452 round_trippers.go:580]     Audit-Id: 4312ba24-0ea9-42a1-9bda-3e8a0f17baa0
	I1014 07:05:39.609827   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 07:05:39.609827   11452 round_trippers.go:580]     Content-Type: application/json
	I1014 07:05:39.609827   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26a3c45b-1a4e-40ca-9f2c-6cbdd9243077
	I1014 07:05:39.609873   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f61aac55-0a5c-497d-832f-4955f256e2fa
	I1014 07:05:39.609873   11452 round_trippers.go:580]     Date: Mon, 14 Oct 2024 14:05:39 GMT
	I1014 07:05:39.610112   11452 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-572000","uid":"8679aa3a-5b35-4e4b-af7b-0576fab65eb9","resourceVersion":"467","creationTimestamp":"2024-10-14T14:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-572000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"functional-572000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T07_03_31_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-10-14T14:03:27Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I1014 07:05:40.103601   11452 round_trippers.go:463] GET https://172.20.99.72:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-572000
	I1014 07:05:40.103601   11452 round_trippers.go:469] Request Headers:
	I1014 07:05:40.103601   11452 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:05:40.103601   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:05:40.108301   11452 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 07:05:40.108301   11452 round_trippers.go:577] Response Headers:
	I1014 07:05:40.108362   11452 round_trippers.go:580]     Audit-Id: e548474b-d47e-4080-bf2d-fd00add1b116
	I1014 07:05:40.108362   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 07:05:40.108362   11452 round_trippers.go:580]     Content-Type: application/json
	I1014 07:05:40.108362   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26a3c45b-1a4e-40ca-9f2c-6cbdd9243077
	I1014 07:05:40.108362   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f61aac55-0a5c-497d-832f-4955f256e2fa
	I1014 07:05:40.108362   11452 round_trippers.go:580]     Date: Mon, 14 Oct 2024 14:05:40 GMT
	I1014 07:05:40.108843   11452 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-572000","namespace":"kube-system","uid":"cfc32caa-722d-454e-a73d-2251e1353c91","resourceVersion":"469","creationTimestamp":"2024-10-14T14:03:30Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.20.99.72:8441","kubernetes.io/config.hash":"3cbcace444fc499282545cf4eeaba920","kubernetes.io/config.mirror":"3cbcace444fc499282545cf4eeaba920","kubernetes.io/config.seen":"2024-10-14T14:03:30.409861925Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-572000","uid":"8679aa3a-5b35-4e4b-af7b-0576fab65eb9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T14:03:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8138 chars]
	I1014 07:05:40.109498   11452 round_trippers.go:463] GET https://172.20.99.72:8441/api/v1/nodes/functional-572000
	I1014 07:05:40.109498   11452 round_trippers.go:469] Request Headers:
	I1014 07:05:40.109498   11452 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:05:40.109498   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:05:40.112800   11452 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 07:05:40.112894   11452 round_trippers.go:577] Response Headers:
	I1014 07:05:40.112936   11452 round_trippers.go:580]     Audit-Id: b0e48e17-54ea-4e77-a3cc-8e01272871ec
	I1014 07:05:40.112936   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 07:05:40.112936   11452 round_trippers.go:580]     Content-Type: application/json
	I1014 07:05:40.112973   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26a3c45b-1a4e-40ca-9f2c-6cbdd9243077
	I1014 07:05:40.112973   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f61aac55-0a5c-497d-832f-4955f256e2fa
	I1014 07:05:40.112973   11452 round_trippers.go:580]     Date: Mon, 14 Oct 2024 14:05:40 GMT
	I1014 07:05:40.113100   11452 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-572000","uid":"8679aa3a-5b35-4e4b-af7b-0576fab65eb9","resourceVersion":"467","creationTimestamp":"2024-10-14T14:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-572000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"functional-572000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T07_03_31_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-10-14T14:03:27Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I1014 07:05:40.603320   11452 round_trippers.go:463] GET https://172.20.99.72:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-572000
	I1014 07:05:40.603320   11452 round_trippers.go:469] Request Headers:
	I1014 07:05:40.603320   11452 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:05:40.603320   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:05:40.606998   11452 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 07:05:40.606998   11452 round_trippers.go:577] Response Headers:
	I1014 07:05:40.606998   11452 round_trippers.go:580]     Content-Type: application/json
	I1014 07:05:40.606998   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26a3c45b-1a4e-40ca-9f2c-6cbdd9243077
	I1014 07:05:40.606998   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f61aac55-0a5c-497d-832f-4955f256e2fa
	I1014 07:05:40.607161   11452 round_trippers.go:580]     Date: Mon, 14 Oct 2024 14:05:40 GMT
	I1014 07:05:40.607161   11452 round_trippers.go:580]     Audit-Id: b7e96395-3eed-42e4-877c-13a73264085f
	I1014 07:05:40.607161   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 07:05:40.607522   11452 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-572000","namespace":"kube-system","uid":"cfc32caa-722d-454e-a73d-2251e1353c91","resourceVersion":"553","creationTimestamp":"2024-10-14T14:03:30Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.20.99.72:8441","kubernetes.io/config.hash":"3cbcace444fc499282545cf4eeaba920","kubernetes.io/config.mirror":"3cbcace444fc499282545cf4eeaba920","kubernetes.io/config.seen":"2024-10-14T14:03:30.409861925Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-572000","uid":"8679aa3a-5b35-4e4b-af7b-0576fab65eb9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T14:03:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7894 chars]
	I1014 07:05:40.608167   11452 round_trippers.go:463] GET https://172.20.99.72:8441/api/v1/nodes/functional-572000
	I1014 07:05:40.608167   11452 round_trippers.go:469] Request Headers:
	I1014 07:05:40.608167   11452 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:05:40.608167   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:05:40.611259   11452 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 07:05:40.611336   11452 round_trippers.go:577] Response Headers:
	I1014 07:05:40.611336   11452 round_trippers.go:580]     Audit-Id: b68c297b-de2b-466a-9966-a5d277daf784
	I1014 07:05:40.611336   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 07:05:40.611336   11452 round_trippers.go:580]     Content-Type: application/json
	I1014 07:05:40.611336   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26a3c45b-1a4e-40ca-9f2c-6cbdd9243077
	I1014 07:05:40.611417   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f61aac55-0a5c-497d-832f-4955f256e2fa
	I1014 07:05:40.611417   11452 round_trippers.go:580]     Date: Mon, 14 Oct 2024 14:05:40 GMT
	I1014 07:05:40.611632   11452 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-572000","uid":"8679aa3a-5b35-4e4b-af7b-0576fab65eb9","resourceVersion":"467","creationTimestamp":"2024-10-14T14:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-572000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"functional-572000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T07_03_31_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-10-14T14:03:27Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I1014 07:05:40.612050   11452 pod_ready.go:93] pod "kube-apiserver-functional-572000" in "kube-system" namespace has status "Ready":"True"
	I1014 07:05:40.612124   11452 pod_ready.go:82] duration metric: took 1.009537s for pod "kube-apiserver-functional-572000" in "kube-system" namespace to be "Ready" ...
	I1014 07:05:40.612124   11452 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-functional-572000" in "kube-system" namespace to be "Ready" ...
	I1014 07:05:40.612271   11452 round_trippers.go:463] GET https://172.20.99.72:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-572000
	I1014 07:05:40.612326   11452 round_trippers.go:469] Request Headers:
	I1014 07:05:40.612326   11452 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:05:40.612326   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:05:40.615580   11452 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 07:05:40.615580   11452 round_trippers.go:577] Response Headers:
	I1014 07:05:40.615687   11452 round_trippers.go:580]     Audit-Id: 27640fba-a2be-4620-9693-053c413dea65
	I1014 07:05:40.615687   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 07:05:40.615687   11452 round_trippers.go:580]     Content-Type: application/json
	I1014 07:05:40.615687   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26a3c45b-1a4e-40ca-9f2c-6cbdd9243077
	I1014 07:05:40.615687   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f61aac55-0a5c-497d-832f-4955f256e2fa
	I1014 07:05:40.615749   11452 round_trippers.go:580]     Date: Mon, 14 Oct 2024 14:05:40 GMT
	I1014 07:05:40.615749   11452 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-572000","namespace":"kube-system","uid":"9a671282-d3b8-4340-a751-e1c512a28d41","resourceVersion":"546","creationTimestamp":"2024-10-14T14:03:30Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"239e3c345451889122897a96102d6a4d","kubernetes.io/config.mirror":"239e3c345451889122897a96102d6a4d","kubernetes.io/config.seen":"2024-10-14T14:03:30.409863425Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-572000","uid":"8679aa3a-5b35-4e4b-af7b-0576fab65eb9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T14:03:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 7467 chars]
	I1014 07:05:40.616682   11452 round_trippers.go:463] GET https://172.20.99.72:8441/api/v1/nodes/functional-572000
	I1014 07:05:40.616682   11452 round_trippers.go:469] Request Headers:
	I1014 07:05:40.616759   11452 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:05:40.616759   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:05:40.619629   11452 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 07:05:40.619629   11452 round_trippers.go:577] Response Headers:
	I1014 07:05:40.619629   11452 round_trippers.go:580]     Audit-Id: 0edaf4c3-f427-403e-8e37-79f99bd1f3e8
	I1014 07:05:40.619629   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 07:05:40.619629   11452 round_trippers.go:580]     Content-Type: application/json
	I1014 07:05:40.619629   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26a3c45b-1a4e-40ca-9f2c-6cbdd9243077
	I1014 07:05:40.619629   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f61aac55-0a5c-497d-832f-4955f256e2fa
	I1014 07:05:40.619750   11452 round_trippers.go:580]     Date: Mon, 14 Oct 2024 14:05:40 GMT
	I1014 07:05:40.619894   11452 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-572000","uid":"8679aa3a-5b35-4e4b-af7b-0576fab65eb9","resourceVersion":"467","creationTimestamp":"2024-10-14T14:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-572000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"functional-572000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T07_03_31_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-10-14T14:03:27Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I1014 07:05:40.620422   11452 pod_ready.go:93] pod "kube-controller-manager-functional-572000" in "kube-system" namespace has status "Ready":"True"
	I1014 07:05:40.620501   11452 pod_ready.go:82] duration metric: took 8.3211ms for pod "kube-controller-manager-functional-572000" in "kube-system" namespace to be "Ready" ...
	I1014 07:05:40.620501   11452 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-5z6cf" in "kube-system" namespace to be "Ready" ...
	I1014 07:05:40.620501   11452 round_trippers.go:463] GET https://172.20.99.72:8441/api/v1/namespaces/kube-system/pods/kube-proxy-5z6cf
	I1014 07:05:40.620501   11452 round_trippers.go:469] Request Headers:
	I1014 07:05:40.620501   11452 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:05:40.620501   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:05:40.623373   11452 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 07:05:40.623373   11452 round_trippers.go:577] Response Headers:
	I1014 07:05:40.623373   11452 round_trippers.go:580]     Audit-Id: d6c77a8a-25c2-498d-966d-263ce9da6e46
	I1014 07:05:40.623373   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 07:05:40.623373   11452 round_trippers.go:580]     Content-Type: application/json
	I1014 07:05:40.623373   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26a3c45b-1a4e-40ca-9f2c-6cbdd9243077
	I1014 07:05:40.623469   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f61aac55-0a5c-497d-832f-4955f256e2fa
	I1014 07:05:40.623469   11452 round_trippers.go:580]     Date: Mon, 14 Oct 2024 14:05:40 GMT
	I1014 07:05:40.623793   11452 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-5z6cf","generateName":"kube-proxy-","namespace":"kube-system","uid":"b623a456-b6e3-47a3-babf-66d8698f5a58","resourceVersion":"484","creationTimestamp":"2024-10-14T14:03:35Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8c31955a-f147-4d2f-aaa7-f1b84425e50e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T14:03:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8c31955a-f147-4d2f-aaa7-f1b84425e50e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6396 chars]
	I1014 07:05:40.624478   11452 round_trippers.go:463] GET https://172.20.99.72:8441/api/v1/nodes/functional-572000
	I1014 07:05:40.624652   11452 round_trippers.go:469] Request Headers:
	I1014 07:05:40.624652   11452 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:05:40.624652   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:05:40.627110   11452 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 07:05:40.627110   11452 round_trippers.go:577] Response Headers:
	I1014 07:05:40.627110   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26a3c45b-1a4e-40ca-9f2c-6cbdd9243077
	I1014 07:05:40.627110   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f61aac55-0a5c-497d-832f-4955f256e2fa
	I1014 07:05:40.627110   11452 round_trippers.go:580]     Date: Mon, 14 Oct 2024 14:05:40 GMT
	I1014 07:05:40.627110   11452 round_trippers.go:580]     Audit-Id: 1f9b3b5d-6d55-42f9-9415-0cb3008875df
	I1014 07:05:40.627110   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 07:05:40.627110   11452 round_trippers.go:580]     Content-Type: application/json
	I1014 07:05:40.628127   11452 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-572000","uid":"8679aa3a-5b35-4e4b-af7b-0576fab65eb9","resourceVersion":"467","creationTimestamp":"2024-10-14T14:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-572000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"functional-572000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T07_03_31_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-10-14T14:03:27Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I1014 07:05:40.628127   11452 pod_ready.go:93] pod "kube-proxy-5z6cf" in "kube-system" namespace has status "Ready":"True"
	I1014 07:05:40.628127   11452 pod_ready.go:82] duration metric: took 7.626ms for pod "kube-proxy-5z6cf" in "kube-system" namespace to be "Ready" ...
	I1014 07:05:40.628127   11452 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-functional-572000" in "kube-system" namespace to be "Ready" ...
	I1014 07:05:40.628127   11452 round_trippers.go:463] GET https://172.20.99.72:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-572000
	I1014 07:05:40.628127   11452 round_trippers.go:469] Request Headers:
	I1014 07:05:40.628127   11452 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:05:40.628127   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:05:40.631103   11452 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 07:05:40.631103   11452 round_trippers.go:577] Response Headers:
	I1014 07:05:40.631103   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 07:05:40.631103   11452 round_trippers.go:580]     Content-Type: application/json
	I1014 07:05:40.631103   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26a3c45b-1a4e-40ca-9f2c-6cbdd9243077
	I1014 07:05:40.631103   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f61aac55-0a5c-497d-832f-4955f256e2fa
	I1014 07:05:40.631103   11452 round_trippers.go:580]     Date: Mon, 14 Oct 2024 14:05:40 GMT
	I1014 07:05:40.631103   11452 round_trippers.go:580]     Audit-Id: ba84bb39-7bc8-44bd-ad9f-834c5f35f85b
	I1014 07:05:40.632088   11452 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-572000","namespace":"kube-system","uid":"3ba63999-5552-434c-92ef-a0adeba9bc26","resourceVersion":"548","creationTimestamp":"2024-10-14T14:03:30Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"19e1eb524d6543ed51c677a5177d0183","kubernetes.io/config.mirror":"19e1eb524d6543ed51c677a5177d0183","kubernetes.io/config.seen":"2024-10-14T14:03:30.409864625Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-572000","uid":"8679aa3a-5b35-4e4b-af7b-0576fab65eb9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T14:03:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5197 chars]
	I1014 07:05:40.632088   11452 round_trippers.go:463] GET https://172.20.99.72:8441/api/v1/nodes/functional-572000
	I1014 07:05:40.632088   11452 round_trippers.go:469] Request Headers:
	I1014 07:05:40.632088   11452 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:05:40.632088   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:05:40.634101   11452 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 07:05:40.635095   11452 round_trippers.go:577] Response Headers:
	I1014 07:05:40.635095   11452 round_trippers.go:580]     Content-Type: application/json
	I1014 07:05:40.635095   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26a3c45b-1a4e-40ca-9f2c-6cbdd9243077
	I1014 07:05:40.635095   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f61aac55-0a5c-497d-832f-4955f256e2fa
	I1014 07:05:40.635095   11452 round_trippers.go:580]     Date: Mon, 14 Oct 2024 14:05:40 GMT
	I1014 07:05:40.635095   11452 round_trippers.go:580]     Audit-Id: 005afddf-cb32-47b4-abf3-fe7cd2e4a31e
	I1014 07:05:40.635095   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 07:05:40.635095   11452 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-572000","uid":"8679aa3a-5b35-4e4b-af7b-0576fab65eb9","resourceVersion":"467","creationTimestamp":"2024-10-14T14:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-572000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"functional-572000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T07_03_31_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-10-14T14:03:27Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I1014 07:05:40.635095   11452 pod_ready.go:93] pod "kube-scheduler-functional-572000" in "kube-system" namespace has status "Ready":"True"
	I1014 07:05:40.636094   11452 pod_ready.go:82] duration metric: took 7.967ms for pod "kube-scheduler-functional-572000" in "kube-system" namespace to be "Ready" ...
	I1014 07:05:40.636094   11452 pod_ready.go:39] duration metric: took 12.0686009s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
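The summary above names the label selectors that define the system-critical set: one group per component plus the kube-dns and kube-proxy k8s-app labels. A sketch of resolving one such group with a label-selector list call (kubeconfig setup as in the earlier sketch; the selector shown is the etcd group from the log line):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // One selector per group in the log line; "component=etcd" shown here.
        pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
            metav1.ListOptions{LabelSelector: "component=etcd"})
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            fmt.Println(p.Name)
        }
    }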
	I1014 07:05:40.636094   11452 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1014 07:05:40.654203   11452 command_runner.go:130] > -16
	I1014 07:05:40.654307   11452 ops.go:34] apiserver oom_adj: -16
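The oom_adj probe verifies that the API server is shielded from the kernel's OOM killer: a value of -16 makes it far less likely to be chosen under memory pressure. minikube runs the command over SSH inside the VM; a local-shell sketch of the same probe (running it on the host instead of in the VM is an assumption, and it only works on a Linux box where kube-apiserver is running):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Equivalent to the log's: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
        out, err := exec.Command("/bin/bash", "-c",
            `cat /proc/$(pgrep kube-apiserver)/oom_adj`).Output()
        if err != nil {
            fmt.Println("probe failed:", err)
            return
        }
        fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(out)))
    }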
	I1014 07:05:40.654307   11452 kubeadm.go:597] duration metric: took 22.358526s to restartPrimaryControlPlane
	I1014 07:05:40.654307   11452 kubeadm.go:394] duration metric: took 22.4226449s to StartCluster
	I1014 07:05:40.654307   11452 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:05:40.654655   11452 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1014 07:05:40.655665   11452 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:05:40.657659   11452 start.go:235] Will wait 6m0s for node &{Name: IP:172.20.99.72 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1014 07:05:40.657659   11452 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1014 07:05:40.657659   11452 addons.go:69] Setting storage-provisioner=true in profile "functional-572000"
	I1014 07:05:40.657659   11452 addons.go:69] Setting default-storageclass=true in profile "functional-572000"
	I1014 07:05:40.657659   11452 addons.go:234] Setting addon storage-provisioner=true in "functional-572000"
	W1014 07:05:40.657659   11452 addons.go:243] addon storage-provisioner should already be in state true
	I1014 07:05:40.657659   11452 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-572000"
	I1014 07:05:40.657659   11452 config.go:182] Loaded profile config "functional-572000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:05:40.657659   11452 host.go:66] Checking if "functional-572000" exists ...
	I1014 07:05:40.660657   11452 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-572000 ).state
	I1014 07:05:40.660657   11452 out.go:177] * Verifying Kubernetes components...
	I1014 07:05:40.661673   11452 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-572000 ).state
	I1014 07:05:40.677658   11452 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:05:40.979838   11452 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 07:05:41.008041   11452 node_ready.go:35] waiting up to 6m0s for node "functional-572000" to be "Ready" ...
	I1014 07:05:41.008408   11452 round_trippers.go:463] GET https://172.20.99.72:8441/api/v1/nodes/functional-572000
	I1014 07:05:41.008408   11452 round_trippers.go:469] Request Headers:
	I1014 07:05:41.008513   11452 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:05:41.008513   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:05:41.012728   11452 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 07:05:41.012857   11452 round_trippers.go:577] Response Headers:
	I1014 07:05:41.012857   11452 round_trippers.go:580]     Date: Mon, 14 Oct 2024 14:05:41 GMT
	I1014 07:05:41.012857   11452 round_trippers.go:580]     Audit-Id: a2dd7813-35ae-4ded-91ce-3b2820367d39
	I1014 07:05:41.012857   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 07:05:41.012857   11452 round_trippers.go:580]     Content-Type: application/json
	I1014 07:05:41.012857   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26a3c45b-1a4e-40ca-9f2c-6cbdd9243077
	I1014 07:05:41.012857   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f61aac55-0a5c-497d-832f-4955f256e2fa
	I1014 07:05:41.013186   11452 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-572000","uid":"8679aa3a-5b35-4e4b-af7b-0576fab65eb9","resourceVersion":"467","creationTimestamp":"2024-10-14T14:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-572000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"functional-572000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T07_03_31_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-10-14T14:03:27Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I1014 07:05:41.013284   11452 node_ready.go:49] node "functional-572000" has status "Ready":"True"
	I1014 07:05:41.013284   11452 node_ready.go:38] duration metric: took 5.105ms for node "functional-572000" to be "Ready" ...
	I1014 07:05:41.013284   11452 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
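[editor's note] The "extra waiting" phase above is a label-selector poll against kube-system until every matching pod reports the PodReady condition as True. A rough client-go equivalent follows; this is a sketch, not minikube's actual pod_ready.go, with the selector list and the 6m0s budget copied from the log and the kubeconfig assumed to be in the default location:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Same system-critical selectors the log waits on.
	selectors := []string{"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler"}
	deadline := time.Now().Add(6 * time.Minute) // the log's 6m0s budget
	for _, sel := range selectors {
		for {
			pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
				metav1.ListOptions{LabelSelector: sel})
			if err != nil {
				panic(err)
			}
			allReady := len(pods.Items) > 0
			for i := range pods.Items {
				allReady = allReady && podReady(&pods.Items[i])
			}
			if allReady {
				break
			}
			if time.Now().After(deadline) {
				panic(fmt.Sprintf("timed out waiting for %s", sel))
			}
			time.Sleep(2 * time.Second)
		}
	}
	fmt.Println("all system-critical pods Ready")
}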
	I1014 07:05:41.013793   11452 round_trippers.go:463] GET https://172.20.99.72:8441/api/v1/namespaces/kube-system/pods
	I1014 07:05:41.013793   11452 round_trippers.go:469] Request Headers:
	I1014 07:05:41.013793   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:05:41.013793   11452 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:05:41.017833   11452 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 07:05:41.017833   11452 round_trippers.go:577] Response Headers:
	I1014 07:05:41.017833   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26a3c45b-1a4e-40ca-9f2c-6cbdd9243077
	I1014 07:05:41.017833   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f61aac55-0a5c-497d-832f-4955f256e2fa
	I1014 07:05:41.017833   11452 round_trippers.go:580]     Date: Mon, 14 Oct 2024 14:05:41 GMT
	I1014 07:05:41.017833   11452 round_trippers.go:580]     Audit-Id: 78afc8db-b756-4cba-9dd6-89be35994fdb
	I1014 07:05:41.017833   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 07:05:41.017833   11452 round_trippers.go:580]     Content-Type: application/json
	I1014 07:05:41.018807   11452 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"553"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-pst6d","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"0b49ac95-9a84-453c-8a36-f2e2eb5d257a","resourceVersion":"483","creationTimestamp":"2024-10-14T14:03:35Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"c05f8713-7390-40d7-8e74-6872459faf55","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T14:03:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c05f8713-7390-40d7-8e74-6872459faf55\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 51174 chars]
	I1014 07:05:41.020793   11452 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-pst6d" in "kube-system" namespace to be "Ready" ...
	I1014 07:05:41.020793   11452 round_trippers.go:463] GET https://172.20.99.72:8441/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-pst6d
	I1014 07:05:41.020793   11452 round_trippers.go:469] Request Headers:
	I1014 07:05:41.020793   11452 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:05:41.020793   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:05:41.023813   11452 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 07:05:41.023813   11452 round_trippers.go:577] Response Headers:
	I1014 07:05:41.023813   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f61aac55-0a5c-497d-832f-4955f256e2fa
	I1014 07:05:41.023813   11452 round_trippers.go:580]     Date: Mon, 14 Oct 2024 14:05:41 GMT
	I1014 07:05:41.023813   11452 round_trippers.go:580]     Audit-Id: 77ead514-8a93-4591-b05a-8be2c229d308
	I1014 07:05:41.023813   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 07:05:41.023813   11452 round_trippers.go:580]     Content-Type: application/json
	I1014 07:05:41.023813   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26a3c45b-1a4e-40ca-9f2c-6cbdd9243077
	I1014 07:05:41.023813   11452 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-pst6d","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"0b49ac95-9a84-453c-8a36-f2e2eb5d257a","resourceVersion":"483","creationTimestamp":"2024-10-14T14:03:35Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"c05f8713-7390-40d7-8e74-6872459faf55","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T14:03:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c05f8713-7390-40d7-8e74-6872459faf55\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6704 chars]
	I1014 07:05:41.191344   11452 request.go:632] Waited for 166.5363ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.99.72:8441/api/v1/nodes/functional-572000
	I1014 07:05:41.191344   11452 round_trippers.go:463] GET https://172.20.99.72:8441/api/v1/nodes/functional-572000
	I1014 07:05:41.191720   11452 round_trippers.go:469] Request Headers:
	I1014 07:05:41.191720   11452 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:05:41.191720   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:05:41.196676   11452 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 07:05:41.196793   11452 round_trippers.go:577] Response Headers:
	I1014 07:05:41.196793   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f61aac55-0a5c-497d-832f-4955f256e2fa
	I1014 07:05:41.196860   11452 round_trippers.go:580]     Date: Mon, 14 Oct 2024 14:05:41 GMT
	I1014 07:05:41.196860   11452 round_trippers.go:580]     Audit-Id: 2c047390-8073-460d-bf19-be24c7cb9591
	I1014 07:05:41.196860   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 07:05:41.196860   11452 round_trippers.go:580]     Content-Type: application/json
	I1014 07:05:41.196860   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26a3c45b-1a4e-40ca-9f2c-6cbdd9243077
	I1014 07:05:41.197939   11452 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-572000","uid":"8679aa3a-5b35-4e4b-af7b-0576fab65eb9","resourceVersion":"467","creationTimestamp":"2024-10-14T14:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-572000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"functional-572000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T07_03_31_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-10-14T14:03:27Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I1014 07:05:41.198583   11452 pod_ready.go:93] pod "coredns-7c65d6cfc9-pst6d" in "kube-system" namespace has status "Ready":"True"
	I1014 07:05:41.198583   11452 pod_ready.go:82] duration metric: took 177.7904ms for pod "coredns-7c65d6cfc9-pst6d" in "kube-system" namespace to be "Ready" ...
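[editor's note] The recurring "Waited for … due to client-side throttling, not priority and fairness" entries are produced on the client, not by the API server: when rest.Config leaves QPS and Burst at zero, client-go applies its defaults of 5 QPS with a burst of 10, so the back-to-back node and pod GETs in each poll cycle queue for roughly 170-195 ms each. A sketch of lifting that limit; the 50/100 values are illustrative, not what minikube configures:

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// Zero values mean "use defaults" (QPS 5, Burst 10), which is what
	// produces the throttling waits in this log.
	cfg.QPS = 50
	cfg.Burst = 100
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	_ = cs // subsequent polls now go out without the rate-limiter delay
}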
	I1014 07:05:41.198583   11452 pod_ready.go:79] waiting up to 6m0s for pod "etcd-functional-572000" in "kube-system" namespace to be "Ready" ...
	I1014 07:05:41.391367   11452 request.go:632] Waited for 192.5445ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.99.72:8441/api/v1/namespaces/kube-system/pods/etcd-functional-572000
	I1014 07:05:41.391367   11452 round_trippers.go:463] GET https://172.20.99.72:8441/api/v1/namespaces/kube-system/pods/etcd-functional-572000
	I1014 07:05:41.391367   11452 round_trippers.go:469] Request Headers:
	I1014 07:05:41.391367   11452 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:05:41.391367   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:05:41.396000   11452 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 07:05:41.396000   11452 round_trippers.go:577] Response Headers:
	I1014 07:05:41.396167   11452 round_trippers.go:580]     Date: Mon, 14 Oct 2024 14:05:41 GMT
	I1014 07:05:41.396167   11452 round_trippers.go:580]     Audit-Id: ec576f81-4aae-476a-bb22-4c9cf2973619
	I1014 07:05:41.396167   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 07:05:41.396167   11452 round_trippers.go:580]     Content-Type: application/json
	I1014 07:05:41.396167   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26a3c45b-1a4e-40ca-9f2c-6cbdd9243077
	I1014 07:05:41.396167   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f61aac55-0a5c-497d-832f-4955f256e2fa
	I1014 07:05:41.396292   11452 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-572000","namespace":"kube-system","uid":"0aaa43a1-8d67-4228-963a-de1b151d9420","resourceVersion":"551","creationTimestamp":"2024-10-14T14:03:28Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.20.99.72:2379","kubernetes.io/config.hash":"b77dd04ee881ae665b99c75cd6250572","kubernetes.io/config.mirror":"b77dd04ee881ae665b99c75cd6250572","kubernetes.io/config.seen":"2024-10-14T14:03:23.002167709Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-572000","uid":"8679aa3a-5b35-4e4b-af7b-0576fab65eb9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T14:03:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6668 chars]
	I1014 07:05:41.590810   11452 request.go:632] Waited for 193.6818ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.99.72:8441/api/v1/nodes/functional-572000
	I1014 07:05:41.591262   11452 round_trippers.go:463] GET https://172.20.99.72:8441/api/v1/nodes/functional-572000
	I1014 07:05:41.591262   11452 round_trippers.go:469] Request Headers:
	I1014 07:05:41.591390   11452 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:05:41.591390   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:05:41.599755   11452 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1014 07:05:41.599826   11452 round_trippers.go:577] Response Headers:
	I1014 07:05:41.599963   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 07:05:41.599963   11452 round_trippers.go:580]     Content-Type: application/json
	I1014 07:05:41.599963   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26a3c45b-1a4e-40ca-9f2c-6cbdd9243077
	I1014 07:05:41.599963   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f61aac55-0a5c-497d-832f-4955f256e2fa
	I1014 07:05:41.599963   11452 round_trippers.go:580]     Date: Mon, 14 Oct 2024 14:05:41 GMT
	I1014 07:05:41.599963   11452 round_trippers.go:580]     Audit-Id: 6b1cff00-765b-4943-b939-c8e84c972076
	I1014 07:05:41.599963   11452 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-572000","uid":"8679aa3a-5b35-4e4b-af7b-0576fab65eb9","resourceVersion":"467","creationTimestamp":"2024-10-14T14:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-572000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"functional-572000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T07_03_31_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-10-14T14:03:27Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I1014 07:05:41.601325   11452 pod_ready.go:93] pod "etcd-functional-572000" in "kube-system" namespace has status "Ready":"True"
	I1014 07:05:41.601457   11452 pod_ready.go:82] duration metric: took 402.6874ms for pod "etcd-functional-572000" in "kube-system" namespace to be "Ready" ...
	I1014 07:05:41.601515   11452 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-functional-572000" in "kube-system" namespace to be "Ready" ...
	I1014 07:05:41.791488   11452 request.go:632] Waited for 189.7614ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.99.72:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-572000
	I1014 07:05:41.791488   11452 round_trippers.go:463] GET https://172.20.99.72:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-572000
	I1014 07:05:41.791488   11452 round_trippers.go:469] Request Headers:
	I1014 07:05:41.791488   11452 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:05:41.791488   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:05:41.796160   11452 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 07:05:41.796242   11452 round_trippers.go:577] Response Headers:
	I1014 07:05:41.796242   11452 round_trippers.go:580]     Date: Mon, 14 Oct 2024 14:05:41 GMT
	I1014 07:05:41.796242   11452 round_trippers.go:580]     Audit-Id: 92dae956-05be-4b80-9979-28a9a76fbfed
	I1014 07:05:41.796242   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 07:05:41.796242   11452 round_trippers.go:580]     Content-Type: application/json
	I1014 07:05:41.796242   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26a3c45b-1a4e-40ca-9f2c-6cbdd9243077
	I1014 07:05:41.796242   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f61aac55-0a5c-497d-832f-4955f256e2fa
	I1014 07:05:41.796869   11452 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-572000","namespace":"kube-system","uid":"cfc32caa-722d-454e-a73d-2251e1353c91","resourceVersion":"553","creationTimestamp":"2024-10-14T14:03:30Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.20.99.72:8441","kubernetes.io/config.hash":"3cbcace444fc499282545cf4eeaba920","kubernetes.io/config.mirror":"3cbcace444fc499282545cf4eeaba920","kubernetes.io/config.seen":"2024-10-14T14:03:30.409861925Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-572000","uid":"8679aa3a-5b35-4e4b-af7b-0576fab65eb9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T14:03:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7894 chars]
	I1014 07:05:41.991308   11452 request.go:632] Waited for 193.7142ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.99.72:8441/api/v1/nodes/functional-572000
	I1014 07:05:41.991692   11452 round_trippers.go:463] GET https://172.20.99.72:8441/api/v1/nodes/functional-572000
	I1014 07:05:41.991692   11452 round_trippers.go:469] Request Headers:
	I1014 07:05:41.991776   11452 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:05:41.991776   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:05:41.996029   11452 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 07:05:41.996147   11452 round_trippers.go:577] Response Headers:
	I1014 07:05:41.996147   11452 round_trippers.go:580]     Audit-Id: 71c5f7ac-d403-4bcc-9aa7-de001ede40c0
	I1014 07:05:41.996147   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 07:05:41.996147   11452 round_trippers.go:580]     Content-Type: application/json
	I1014 07:05:41.996234   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26a3c45b-1a4e-40ca-9f2c-6cbdd9243077
	I1014 07:05:41.996289   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f61aac55-0a5c-497d-832f-4955f256e2fa
	I1014 07:05:41.996289   11452 round_trippers.go:580]     Date: Mon, 14 Oct 2024 14:05:41 GMT
	I1014 07:05:41.996733   11452 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-572000","uid":"8679aa3a-5b35-4e4b-af7b-0576fab65eb9","resourceVersion":"467","creationTimestamp":"2024-10-14T14:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-572000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"functional-572000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T07_03_31_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-10-14T14:03:27Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I1014 07:05:41.997486   11452 pod_ready.go:93] pod "kube-apiserver-functional-572000" in "kube-system" namespace has status "Ready":"True"
	I1014 07:05:41.997486   11452 pod_ready.go:82] duration metric: took 395.9091ms for pod "kube-apiserver-functional-572000" in "kube-system" namespace to be "Ready" ...
	I1014 07:05:41.997486   11452 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-functional-572000" in "kube-system" namespace to be "Ready" ...
	I1014 07:05:42.191086   11452 request.go:632] Waited for 193.5998ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.99.72:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-572000
	I1014 07:05:42.191086   11452 round_trippers.go:463] GET https://172.20.99.72:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-572000
	I1014 07:05:42.191479   11452 round_trippers.go:469] Request Headers:
	I1014 07:05:42.191538   11452 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:05:42.191538   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:05:42.196336   11452 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 07:05:42.196336   11452 round_trippers.go:577] Response Headers:
	I1014 07:05:42.196336   11452 round_trippers.go:580]     Audit-Id: acd22fea-7655-4683-bce9-218d40d8a57b
	I1014 07:05:42.196467   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 07:05:42.196467   11452 round_trippers.go:580]     Content-Type: application/json
	I1014 07:05:42.196467   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26a3c45b-1a4e-40ca-9f2c-6cbdd9243077
	I1014 07:05:42.196467   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f61aac55-0a5c-497d-832f-4955f256e2fa
	I1014 07:05:42.196467   11452 round_trippers.go:580]     Date: Mon, 14 Oct 2024 14:05:42 GMT
	I1014 07:05:42.197002   11452 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-572000","namespace":"kube-system","uid":"9a671282-d3b8-4340-a751-e1c512a28d41","resourceVersion":"546","creationTimestamp":"2024-10-14T14:03:30Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"239e3c345451889122897a96102d6a4d","kubernetes.io/config.mirror":"239e3c345451889122897a96102d6a4d","kubernetes.io/config.seen":"2024-10-14T14:03:30.409863425Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-572000","uid":"8679aa3a-5b35-4e4b-af7b-0576fab65eb9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T14:03:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 7467 chars]
	I1014 07:05:42.391388   11452 request.go:632] Waited for 193.4152ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.99.72:8441/api/v1/nodes/functional-572000
	I1014 07:05:42.391388   11452 round_trippers.go:463] GET https://172.20.99.72:8441/api/v1/nodes/functional-572000
	I1014 07:05:42.391388   11452 round_trippers.go:469] Request Headers:
	I1014 07:05:42.391388   11452 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:05:42.391388   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:05:42.395654   11452 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 07:05:42.395741   11452 round_trippers.go:577] Response Headers:
	I1014 07:05:42.395741   11452 round_trippers.go:580]     Audit-Id: ba0f8c8a-2964-4fd8-806f-1575094e7d49
	I1014 07:05:42.395741   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 07:05:42.395741   11452 round_trippers.go:580]     Content-Type: application/json
	I1014 07:05:42.395741   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26a3c45b-1a4e-40ca-9f2c-6cbdd9243077
	I1014 07:05:42.395741   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f61aac55-0a5c-497d-832f-4955f256e2fa
	I1014 07:05:42.395800   11452 round_trippers.go:580]     Date: Mon, 14 Oct 2024 14:05:42 GMT
	I1014 07:05:42.395970   11452 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-572000","uid":"8679aa3a-5b35-4e4b-af7b-0576fab65eb9","resourceVersion":"467","creationTimestamp":"2024-10-14T14:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-572000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"functional-572000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T07_03_31_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-10-14T14:03:27Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I1014 07:05:42.396652   11452 pod_ready.go:93] pod "kube-controller-manager-functional-572000" in "kube-system" namespace has status "Ready":"True"
	I1014 07:05:42.396712   11452 pod_ready.go:82] duration metric: took 399.2261ms for pod "kube-controller-manager-functional-572000" in "kube-system" namespace to be "Ready" ...
	I1014 07:05:42.396712   11452 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5z6cf" in "kube-system" namespace to be "Ready" ...
	I1014 07:05:42.591767   11452 request.go:632] Waited for 194.9749ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.99.72:8441/api/v1/namespaces/kube-system/pods/kube-proxy-5z6cf
	I1014 07:05:42.591767   11452 round_trippers.go:463] GET https://172.20.99.72:8441/api/v1/namespaces/kube-system/pods/kube-proxy-5z6cf
	I1014 07:05:42.591767   11452 round_trippers.go:469] Request Headers:
	I1014 07:05:42.591767   11452 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:05:42.591767   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:05:42.596204   11452 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 07:05:42.596459   11452 round_trippers.go:577] Response Headers:
	I1014 07:05:42.596459   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 07:05:42.596459   11452 round_trippers.go:580]     Content-Type: application/json
	I1014 07:05:42.596459   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26a3c45b-1a4e-40ca-9f2c-6cbdd9243077
	I1014 07:05:42.596459   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f61aac55-0a5c-497d-832f-4955f256e2fa
	I1014 07:05:42.596574   11452 round_trippers.go:580]     Date: Mon, 14 Oct 2024 14:05:42 GMT
	I1014 07:05:42.596574   11452 round_trippers.go:580]     Audit-Id: c60e8185-7603-4596-86c3-6932f22026f3
	I1014 07:05:42.597417   11452 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-5z6cf","generateName":"kube-proxy-","namespace":"kube-system","uid":"b623a456-b6e3-47a3-babf-66d8698f5a58","resourceVersion":"484","creationTimestamp":"2024-10-14T14:03:35Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8c31955a-f147-4d2f-aaa7-f1b84425e50e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T14:03:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8c31955a-f147-4d2f-aaa7-f1b84425e50e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6396 chars]
	I1014 07:05:42.791327   11452 request.go:632] Waited for 192.6322ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.99.72:8441/api/v1/nodes/functional-572000
	I1014 07:05:42.791327   11452 round_trippers.go:463] GET https://172.20.99.72:8441/api/v1/nodes/functional-572000
	I1014 07:05:42.791327   11452 round_trippers.go:469] Request Headers:
	I1014 07:05:42.791327   11452 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:05:42.791327   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:05:42.795368   11452 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 07:05:42.795948   11452 round_trippers.go:577] Response Headers:
	I1014 07:05:42.795948   11452 round_trippers.go:580]     Content-Type: application/json
	I1014 07:05:42.795948   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26a3c45b-1a4e-40ca-9f2c-6cbdd9243077
	I1014 07:05:42.795948   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f61aac55-0a5c-497d-832f-4955f256e2fa
	I1014 07:05:42.795948   11452 round_trippers.go:580]     Date: Mon, 14 Oct 2024 14:05:42 GMT
	I1014 07:05:42.795948   11452 round_trippers.go:580]     Audit-Id: 5200170f-3861-420d-b6f9-2bdfc3b13577
	I1014 07:05:42.795948   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 07:05:42.796295   11452 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-572000","uid":"8679aa3a-5b35-4e4b-af7b-0576fab65eb9","resourceVersion":"467","creationTimestamp":"2024-10-14T14:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-572000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"functional-572000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T07_03_31_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-10-14T14:03:27Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I1014 07:05:42.796850   11452 pod_ready.go:93] pod "kube-proxy-5z6cf" in "kube-system" namespace has status "Ready":"True"
	I1014 07:05:42.796936   11452 pod_ready.go:82] duration metric: took 400.2235ms for pod "kube-proxy-5z6cf" in "kube-system" namespace to be "Ready" ...
	I1014 07:05:42.796936   11452 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-functional-572000" in "kube-system" namespace to be "Ready" ...
	I1014 07:05:42.871271   11452 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:05:42.871495   11452 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:05:42.872010   11452 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:05:42.872010   11452 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:05:42.872396   11452 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1014 07:05:42.872974   11452 kapi.go:59] client config for functional-572000: &rest.Config{Host:"https://172.20.99.72:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\functional-572000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\functional-572000\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), C
AData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2926ae0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
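[editor's note] The kapi.go dump above shows the client is authenticated purely by the profile's certificate files, with no token or password, and with QPS and Burst left at zero, which is exactly what feeds the client-side throttling noted earlier. A hand-built sketch of an equivalent rest.Config, reusing the host and file paths from the dump:

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg := &rest.Config{
		Host: "https://172.20.99.72:8441",
		// Same files as the sanitizedTLSClientConfig dump above.
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: `C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-572000\client.crt`,
			KeyFile:  `C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-572000\client.key`,
			CAFile:   `C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt`,
		},
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	_ = cs // ready for CoreV1() calls against the profile's apiserver
}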
	I1014 07:05:42.873819   11452 addons.go:234] Setting addon default-storageclass=true in "functional-572000"
	W1014 07:05:42.873819   11452 addons.go:243] addon default-storageclass should already be in state true
	I1014 07:05:42.874052   11452 host.go:66] Checking if "functional-572000" exists ...
	I1014 07:05:42.875012   11452 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-572000 ).state
	I1014 07:05:42.877082   11452 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 07:05:42.879948   11452 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 07:05:42.879948   11452 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
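[editor's note] "scp memory --> …" means ssh_runner streams the manifest bytes from the test binary's memory straight into the VM, with no temp file on either side. A rough sketch of the same move with golang.org/x/crypto/ssh; the key path, guest user, port, and the sudo tee pipeline are assumptions for illustration, not ssh_runner's actual mechanics:

package main

import (
	"bytes"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	manifest := []byte("# storage-provisioner.yaml contents (2676 bytes in the log)\n")
	// Assumed key location for the machine; adjust to your profile.
	key, err := os.ReadFile(`C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-572000\id_rsa`)
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	client, err := ssh.Dial("tcp", "172.20.99.72:22", &ssh.ClientConfig{
		User:            "docker", // assumed guest user
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	})
	if err != nil {
		panic(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	// Pipe the in-memory bytes into the addon path on the guest.
	sess.Stdin = bytes.NewReader(manifest)
	if err := sess.Run("sudo tee /etc/kubernetes/addons/storage-provisioner.yaml >/dev/null"); err != nil {
		panic(err)
	}
}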
	I1014 07:05:42.880057   11452 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-572000 ).state
	I1014 07:05:42.990851   11452 request.go:632] Waited for 193.8285ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.99.72:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-572000
	I1014 07:05:42.991143   11452 round_trippers.go:463] GET https://172.20.99.72:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-572000
	I1014 07:05:42.991143   11452 round_trippers.go:469] Request Headers:
	I1014 07:05:42.991143   11452 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:05:42.991143   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:05:42.997143   11452 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 07:05:42.997143   11452 round_trippers.go:577] Response Headers:
	I1014 07:05:42.997235   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26a3c45b-1a4e-40ca-9f2c-6cbdd9243077
	I1014 07:05:42.997235   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f61aac55-0a5c-497d-832f-4955f256e2fa
	I1014 07:05:42.997235   11452 round_trippers.go:580]     Date: Mon, 14 Oct 2024 14:05:42 GMT
	I1014 07:05:42.997235   11452 round_trippers.go:580]     Audit-Id: 488027bd-93f2-4421-9583-b3cf59640850
	I1014 07:05:42.997235   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 07:05:42.997235   11452 round_trippers.go:580]     Content-Type: application/json
	I1014 07:05:42.997688   11452 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-572000","namespace":"kube-system","uid":"3ba63999-5552-434c-92ef-a0adeba9bc26","resourceVersion":"548","creationTimestamp":"2024-10-14T14:03:30Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"19e1eb524d6543ed51c677a5177d0183","kubernetes.io/config.mirror":"19e1eb524d6543ed51c677a5177d0183","kubernetes.io/config.seen":"2024-10-14T14:03:30.409864625Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-572000","uid":"8679aa3a-5b35-4e4b-af7b-0576fab65eb9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T14:03:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5197 chars]
	I1014 07:05:43.191426   11452 request.go:632] Waited for 192.6574ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.99.72:8441/api/v1/nodes/functional-572000
	I1014 07:05:43.191426   11452 round_trippers.go:463] GET https://172.20.99.72:8441/api/v1/nodes/functional-572000
	I1014 07:05:43.191426   11452 round_trippers.go:469] Request Headers:
	I1014 07:05:43.191426   11452 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:05:43.191426   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:05:43.197942   11452 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1014 07:05:43.197942   11452 round_trippers.go:577] Response Headers:
	I1014 07:05:43.197942   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f61aac55-0a5c-497d-832f-4955f256e2fa
	I1014 07:05:43.197942   11452 round_trippers.go:580]     Date: Mon, 14 Oct 2024 14:05:43 GMT
	I1014 07:05:43.197942   11452 round_trippers.go:580]     Audit-Id: ca9d13a2-bd62-460e-a18d-16925fbc3070
	I1014 07:05:43.197942   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 07:05:43.197942   11452 round_trippers.go:580]     Content-Type: application/json
	I1014 07:05:43.197942   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26a3c45b-1a4e-40ca-9f2c-6cbdd9243077
	I1014 07:05:43.198273   11452 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-572000","uid":"8679aa3a-5b35-4e4b-af7b-0576fab65eb9","resourceVersion":"467","creationTimestamp":"2024-10-14T14:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-572000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"functional-572000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T07_03_31_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-10-14T14:03:27Z","fieldsType":"FieldsV1", [truncated 4785 chars]
	I1014 07:05:43.198622   11452 pod_ready.go:93] pod "kube-scheduler-functional-572000" in "kube-system" namespace has status "Ready":"True"
	I1014 07:05:43.198763   11452 pod_ready.go:82] duration metric: took 401.8262ms for pod "kube-scheduler-functional-572000" in "kube-system" namespace to be "Ready" ...
	I1014 07:05:43.198763   11452 pod_ready.go:39] duration metric: took 2.1854764s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 07:05:43.198763   11452 api_server.go:52] waiting for apiserver process to appear ...
	I1014 07:05:43.209898   11452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 07:05:43.238675   11452 command_runner.go:130] > 4839
	I1014 07:05:43.238898   11452 api_server.go:72] duration metric: took 2.5812365s to wait for apiserver process to appear ...
	I1014 07:05:43.239033   11452 api_server.go:88] waiting for apiserver healthz status ...
	I1014 07:05:43.239033   11452 api_server.go:253] Checking apiserver healthz at https://172.20.99.72:8441/healthz ...
	I1014 07:05:43.247085   11452 api_server.go:279] https://172.20.99.72:8441/healthz returned 200:
	ok
	I1014 07:05:43.247156   11452 round_trippers.go:463] GET https://172.20.99.72:8441/version
	I1014 07:05:43.247156   11452 round_trippers.go:469] Request Headers:
	I1014 07:05:43.247156   11452 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:05:43.247156   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:05:43.248829   11452 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1014 07:05:43.248926   11452 round_trippers.go:577] Response Headers:
	I1014 07:05:43.248926   11452 round_trippers.go:580]     Audit-Id: 54caee5d-143c-4057-bc77-204207670b7c
	I1014 07:05:43.248926   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 07:05:43.248926   11452 round_trippers.go:580]     Content-Type: application/json
	I1014 07:05:43.248926   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26a3c45b-1a4e-40ca-9f2c-6cbdd9243077
	I1014 07:05:43.249033   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f61aac55-0a5c-497d-832f-4955f256e2fa
	I1014 07:05:43.249069   11452 round_trippers.go:580]     Content-Length: 263
	I1014 07:05:43.249069   11452 round_trippers.go:580]     Date: Mon, 14 Oct 2024 14:05:43 GMT
	I1014 07:05:43.249069   11452 request.go:1351] Response Body: {
	  "major": "1",
	  "minor": "31",
	  "gitVersion": "v1.31.1",
	  "gitCommit": "948afe5ca072329a73c8e79ed5938717a5cb3d21",
	  "gitTreeState": "clean",
	  "buildDate": "2024-09-11T21:22:08Z",
	  "goVersion": "go1.22.6",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1014 07:05:43.249179   11452 api_server.go:141] control plane version: v1.31.1
	I1014 07:05:43.249179   11452 api_server.go:131] duration metric: took 10.1452ms to wait for apiserver health ...
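[editor's note] The health gate above is two lightweight requests: a raw GET on /healthz whose success body is the literal string "ok", followed by /version, which decodes into the major/minor/gitVersion fields shown in the response. A client-go sketch of both, assuming a reachable kubeconfig in the default location:

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Raw /healthz probe: a healthy apiserver returns 200 with body "ok".
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Printf("healthz: %s\n", body)
	// /version decodes into version.Info (major/minor/gitVersion as in the log).
	v, err := cs.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", v.GitVersion)
}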
	I1014 07:05:43.249269   11452 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 07:05:43.391431   11452 request.go:632] Waited for 142.0982ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.99.72:8441/api/v1/namespaces/kube-system/pods
	I1014 07:05:43.391431   11452 round_trippers.go:463] GET https://172.20.99.72:8441/api/v1/namespaces/kube-system/pods
	I1014 07:05:43.391431   11452 round_trippers.go:469] Request Headers:
	I1014 07:05:43.391431   11452 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:05:43.391431   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:05:43.397246   11452 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 07:05:43.397418   11452 round_trippers.go:577] Response Headers:
	I1014 07:05:43.397418   11452 round_trippers.go:580]     Audit-Id: 253b8a67-e685-46cf-8a4d-4d48d077050e
	I1014 07:05:43.397418   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 07:05:43.397418   11452 round_trippers.go:580]     Content-Type: application/json
	I1014 07:05:43.397418   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26a3c45b-1a4e-40ca-9f2c-6cbdd9243077
	I1014 07:05:43.397418   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f61aac55-0a5c-497d-832f-4955f256e2fa
	I1014 07:05:43.397418   11452 round_trippers.go:580]     Date: Mon, 14 Oct 2024 14:05:43 GMT
	I1014 07:05:43.398394   11452 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"553"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-pst6d","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"0b49ac95-9a84-453c-8a36-f2e2eb5d257a","resourceVersion":"483","creationTimestamp":"2024-10-14T14:03:35Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"c05f8713-7390-40d7-8e74-6872459faf55","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T14:03:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c05f8713-7390-40d7-8e74-6872459faf55\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 51174 chars]
	I1014 07:05:43.401423   11452 system_pods.go:59] 7 kube-system pods found
	I1014 07:05:43.401533   11452 system_pods.go:61] "coredns-7c65d6cfc9-pst6d" [0b49ac95-9a84-453c-8a36-f2e2eb5d257a] Running
	I1014 07:05:43.401533   11452 system_pods.go:61] "etcd-functional-572000" [0aaa43a1-8d67-4228-963a-de1b151d9420] Running
	I1014 07:05:43.401533   11452 system_pods.go:61] "kube-apiserver-functional-572000" [cfc32caa-722d-454e-a73d-2251e1353c91] Running
	I1014 07:05:43.401533   11452 system_pods.go:61] "kube-controller-manager-functional-572000" [9a671282-d3b8-4340-a751-e1c512a28d41] Running
	I1014 07:05:43.401533   11452 system_pods.go:61] "kube-proxy-5z6cf" [b623a456-b6e3-47a3-babf-66d8698f5a58] Running
	I1014 07:05:43.401533   11452 system_pods.go:61] "kube-scheduler-functional-572000" [3ba63999-5552-434c-92ef-a0adeba9bc26] Running
	I1014 07:05:43.401533   11452 system_pods.go:61] "storage-provisioner" [8ec5c3c6-2441-47cb-9862-e6c87bce62c2] Running
	I1014 07:05:43.401533   11452 system_pods.go:74] duration metric: took 152.2632ms to wait for pod list to return data ...
	I1014 07:05:43.401533   11452 default_sa.go:34] waiting for default service account to be created ...
	I1014 07:05:43.591007   11452 request.go:632] Waited for 189.2545ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.99.72:8441/api/v1/namespaces/default/serviceaccounts
	I1014 07:05:43.591007   11452 round_trippers.go:463] GET https://172.20.99.72:8441/api/v1/namespaces/default/serviceaccounts
	I1014 07:05:43.591007   11452 round_trippers.go:469] Request Headers:
	I1014 07:05:43.591007   11452 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:05:43.591007   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:05:43.595848   11452 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 07:05:43.595848   11452 round_trippers.go:577] Response Headers:
	I1014 07:05:43.595848   11452 round_trippers.go:580]     Date: Mon, 14 Oct 2024 14:05:43 GMT
	I1014 07:05:43.595848   11452 round_trippers.go:580]     Audit-Id: 1997f2c2-d50f-4e58-b9e5-c31e77089d3d
	I1014 07:05:43.595848   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 07:05:43.595848   11452 round_trippers.go:580]     Content-Type: application/json
	I1014 07:05:43.595848   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26a3c45b-1a4e-40ca-9f2c-6cbdd9243077
	I1014 07:05:43.595848   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f61aac55-0a5c-497d-832f-4955f256e2fa
	I1014 07:05:43.595848   11452 round_trippers.go:580]     Content-Length: 261
	I1014 07:05:43.596111   11452 request.go:1351] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"553"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"f4b17566-3acf-4b0f-aea2-8fb8cff3aa0d","resourceVersion":"291","creationTimestamp":"2024-10-14T14:03:34Z"}}]}
	I1014 07:05:43.596435   11452 default_sa.go:45] found service account: "default"
	I1014 07:05:43.596565   11452 default_sa.go:55] duration metric: took 194.9108ms for default service account to be created ...
	I1014 07:05:43.596565   11452 system_pods.go:116] waiting for k8s-apps to be running ...
	I1014 07:05:43.790829   11452 request.go:632] Waited for 194.2632ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.99.72:8441/api/v1/namespaces/kube-system/pods
	I1014 07:05:43.790829   11452 round_trippers.go:463] GET https://172.20.99.72:8441/api/v1/namespaces/kube-system/pods
	I1014 07:05:43.791189   11452 round_trippers.go:469] Request Headers:
	I1014 07:05:43.791189   11452 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:05:43.791189   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:05:43.797688   11452 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1014 07:05:43.797688   11452 round_trippers.go:577] Response Headers:
	I1014 07:05:43.797688   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 07:05:43.797688   11452 round_trippers.go:580]     Content-Type: application/json
	I1014 07:05:43.797688   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26a3c45b-1a4e-40ca-9f2c-6cbdd9243077
	I1014 07:05:43.797688   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f61aac55-0a5c-497d-832f-4955f256e2fa
	I1014 07:05:43.797688   11452 round_trippers.go:580]     Date: Mon, 14 Oct 2024 14:05:43 GMT
	I1014 07:05:43.797688   11452 round_trippers.go:580]     Audit-Id: ad800463-047a-40a2-bc8b-120c3bdf7ea3
	I1014 07:05:43.798465   11452 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"553"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-pst6d","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"0b49ac95-9a84-453c-8a36-f2e2eb5d257a","resourceVersion":"483","creationTimestamp":"2024-10-14T14:03:35Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"c05f8713-7390-40d7-8e74-6872459faf55","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T14:03:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c05f8713-7390-40d7-8e74-6872459faf55\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 51174 chars]
	I1014 07:05:43.801695   11452 system_pods.go:86] 7 kube-system pods found
	I1014 07:05:43.801729   11452 system_pods.go:89] "coredns-7c65d6cfc9-pst6d" [0b49ac95-9a84-453c-8a36-f2e2eb5d257a] Running
	I1014 07:05:43.801729   11452 system_pods.go:89] "etcd-functional-572000" [0aaa43a1-8d67-4228-963a-de1b151d9420] Running
	I1014 07:05:43.801729   11452 system_pods.go:89] "kube-apiserver-functional-572000" [cfc32caa-722d-454e-a73d-2251e1353c91] Running
	I1014 07:05:43.801794   11452 system_pods.go:89] "kube-controller-manager-functional-572000" [9a671282-d3b8-4340-a751-e1c512a28d41] Running
	I1014 07:05:43.801794   11452 system_pods.go:89] "kube-proxy-5z6cf" [b623a456-b6e3-47a3-babf-66d8698f5a58] Running
	I1014 07:05:43.801794   11452 system_pods.go:89] "kube-scheduler-functional-572000" [3ba63999-5552-434c-92ef-a0adeba9bc26] Running
	I1014 07:05:43.801794   11452 system_pods.go:89] "storage-provisioner" [8ec5c3c6-2441-47cb-9862-e6c87bce62c2] Running
	I1014 07:05:43.801794   11452 system_pods.go:126] duration metric: took 205.2281ms to wait for k8s-apps to be running ...
	I1014 07:05:43.801871   11452 system_svc.go:44] waiting for kubelet service to be running ....
	I1014 07:05:43.813685   11452 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 07:05:43.842386   11452 system_svc.go:56] duration metric: took 40.4927ms WaitForService to wait for kubelet
	I1014 07:05:43.842455   11452 kubeadm.go:582] duration metric: took 3.1847928s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 07:05:43.842455   11452 node_conditions.go:102] verifying NodePressure condition ...
	I1014 07:05:43.991647   11452 request.go:632] Waited for 149.0892ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.99.72:8441/api/v1/nodes
	I1014 07:05:43.991647   11452 round_trippers.go:463] GET https://172.20.99.72:8441/api/v1/nodes
	I1014 07:05:43.991647   11452 round_trippers.go:469] Request Headers:
	I1014 07:05:43.991647   11452 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:05:43.991647   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:05:43.999859   11452 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1014 07:05:43.999859   11452 round_trippers.go:577] Response Headers:
	I1014 07:05:43.999859   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f61aac55-0a5c-497d-832f-4955f256e2fa
	I1014 07:05:43.999859   11452 round_trippers.go:580]     Date: Mon, 14 Oct 2024 14:05:43 GMT
	I1014 07:05:44.000755   11452 round_trippers.go:580]     Audit-Id: f4de46e9-799a-4fca-9c0f-1bc46b99de7c
	I1014 07:05:44.000755   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 07:05:44.000755   11452 round_trippers.go:580]     Content-Type: application/json
	I1014 07:05:44.000755   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26a3c45b-1a4e-40ca-9f2c-6cbdd9243077
	I1014 07:05:44.000755   11452 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"553"},"items":[{"metadata":{"name":"functional-572000","uid":"8679aa3a-5b35-4e4b-af7b-0576fab65eb9","resourceVersion":"467","creationTimestamp":"2024-10-14T14:03:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-572000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"functional-572000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T07_03_31_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","ti [truncated 4838 chars]
	I1014 07:05:44.000755   11452 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 07:05:44.000755   11452 node_conditions.go:123] node cpu capacity is 2
	I1014 07:05:44.000755   11452 node_conditions.go:105] duration metric: took 158.3003ms to run NodePressure ...
	I1014 07:05:44.001579   11452 start.go:241] waiting for startup goroutines ...
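	
	The NodePressure check reads the node's reported capacity (2 CPUs and 17734596Ki of ephemeral storage here). Roughly the same fields can be pulled with kubectl, assuming the usual profile-named context:
	
	  kubectl --context functional-572000 get node functional-572000 -o jsonpath="{.status.capacity}"
	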
	I1014 07:05:45.111766   11452 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:05:45.111766   11452 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:05:45.111932   11452 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-572000 ).networkadapters[0]).ipaddresses[0]
	I1014 07:05:45.124779   11452 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:05:45.124779   11452 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:05:45.124779   11452 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1014 07:05:45.124779   11452 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1014 07:05:45.125524   11452 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-572000 ).state
	I1014 07:05:47.337126   11452 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:05:47.337126   11452 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:05:47.337126   11452 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-572000 ).networkadapters[0]).ipaddresses[0]
	I1014 07:05:47.703417   11452 main.go:141] libmachine: [stdout =====>] : 172.20.99.72
	
	I1014 07:05:47.703417   11452 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:05:47.704310   11452 sshutil.go:53] new ssh client: &{IP:172.20.99.72 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-572000\id_rsa Username:docker}
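	
	libmachine resolves the VM's address by querying the Hyper-V PowerShell module, then opens an SSH client against it with the profile's key. The same lookup, run directly in an elevated PowerShell session on the host, is the command quoted in the log:
	
	  (( Hyper-V\Get-VM functional-572000 ).networkadapters[0]).ipaddresses[0]
	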
	I1014 07:05:47.848342   11452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 07:05:48.667434   11452 command_runner.go:130] > serviceaccount/storage-provisioner unchanged
	I1014 07:05:48.667558   11452 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner unchanged
	I1014 07:05:48.667558   11452 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I1014 07:05:48.667558   11452 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I1014 07:05:48.667558   11452 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath unchanged
	I1014 07:05:48.667558   11452 command_runner.go:130] > pod/storage-provisioner configured
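	
	Every storage-provisioner object comes back "unchanged" (only the pod is "configured") because the same manifest was already applied on first boot; the re-apply after restart is idempotent. One way to confirm the provisioner pod is the running instance seen earlier, again assuming the profile-named context:
	
	  kubectl --context functional-572000 -n kube-system get pod storage-provisioner
	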
	I1014 07:05:49.848101   11452 main.go:141] libmachine: [stdout =====>] : 172.20.99.72
	
	I1014 07:05:49.848101   11452 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:05:49.849084   11452 sshutil.go:53] new ssh client: &{IP:172.20.99.72 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-572000\id_rsa Username:docker}
	I1014 07:05:49.982031   11452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1014 07:05:50.151137   11452 command_runner.go:130] > storageclass.storage.k8s.io/standard unchanged
	I1014 07:05:50.151469   11452 round_trippers.go:463] GET https://172.20.99.72:8441/apis/storage.k8s.io/v1/storageclasses
	I1014 07:05:50.151532   11452 round_trippers.go:469] Request Headers:
	I1014 07:05:50.151532   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:05:50.151532   11452 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:05:50.155605   11452 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 07:05:50.155605   11452 round_trippers.go:577] Response Headers:
	I1014 07:05:50.155605   11452 round_trippers.go:580]     Content-Length: 1273
	I1014 07:05:50.155605   11452 round_trippers.go:580]     Date: Mon, 14 Oct 2024 14:05:50 GMT
	I1014 07:05:50.155721   11452 round_trippers.go:580]     Audit-Id: de6eaf44-d7b2-4c47-a091-8d64debf95c5
	I1014 07:05:50.155721   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 07:05:50.155721   11452 round_trippers.go:580]     Content-Type: application/json
	I1014 07:05:50.155721   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26a3c45b-1a4e-40ca-9f2c-6cbdd9243077
	I1014 07:05:50.155721   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f61aac55-0a5c-497d-832f-4955f256e2fa
	I1014 07:05:50.155783   11452 request.go:1351] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"560"},"items":[{"metadata":{"name":"standard","uid":"311d581c-8420-45cc-9cdb-e7aa7c9ae730","resourceVersion":"391","creationTimestamp":"2024-10-14T14:03:44Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-10-14T14:03:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I1014 07:05:50.156536   11452 request.go:1351] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"311d581c-8420-45cc-9cdb-e7aa7c9ae730","resourceVersion":"391","creationTimestamp":"2024-10-14T14:03:44Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-10-14T14:03:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1014 07:05:50.156536   11452 round_trippers.go:463] PUT https://172.20.99.72:8441/apis/storage.k8s.io/v1/storageclasses/standard
	I1014 07:05:50.156536   11452 round_trippers.go:469] Request Headers:
	I1014 07:05:50.156536   11452 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:05:50.156536   11452 round_trippers.go:473]     Content-Type: application/json
	I1014 07:05:50.156536   11452 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:05:50.161156   11452 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 07:05:50.161156   11452 round_trippers.go:577] Response Headers:
	I1014 07:05:50.161156   11452 round_trippers.go:580]     Content-Type: application/json
	I1014 07:05:50.161156   11452 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 26a3c45b-1a4e-40ca-9f2c-6cbdd9243077
	I1014 07:05:50.161156   11452 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f61aac55-0a5c-497d-832f-4955f256e2fa
	I1014 07:05:50.161156   11452 round_trippers.go:580]     Content-Length: 1220
	I1014 07:05:50.161156   11452 round_trippers.go:580]     Date: Mon, 14 Oct 2024 14:05:50 GMT
	I1014 07:05:50.161156   11452 round_trippers.go:580]     Audit-Id: 2cfbd619-9aad-4c00-9cde-788ac8a4423d
	I1014 07:05:50.161156   11452 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 07:05:50.161156   11452 request.go:1351] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"311d581c-8420-45cc-9cdb-e7aa7c9ae730","resourceVersion":"391","creationTimestamp":"2024-10-14T14:03:44Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-10-14T14:03:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
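	
	The GET/PUT pair above re-asserts the "standard" StorageClass, whose storageclass.kubernetes.io/is-default-class annotation marks it as the cluster default. kubectl flags the default class in its listing, so a quick check is:
	
	  kubectl --context functional-572000 get storageclass
	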
	I1014 07:05:50.164106   11452 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1014 07:05:50.168118   11452 addons.go:510] duration metric: took 9.5104485s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1014 07:05:50.168118   11452 start.go:246] waiting for cluster config update ...
	I1014 07:05:50.168118   11452 start.go:255] writing updated cluster config ...
	I1014 07:05:50.179106   11452 ssh_runner.go:195] Run: rm -f paused
	I1014 07:05:50.324394   11452 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1014 07:05:50.330373   11452 out.go:177] * Done! kubectl is now configured to use "functional-572000" cluster and "default" namespace by default
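	
	At this point the profile's kubeconfig entry has been written and selected. A sanity check that the active context really is this profile (minikube names the context after the profile):
	
	  kubectl config current-context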
	
	
	==> Docker <==
	Oct 14 14:05:27 functional-572000 dockerd[3905]: time="2024-10-14T14:05:27.071695946Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:05:27 functional-572000 dockerd[3905]: time="2024-10-14T14:05:27.071953247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:05:27 functional-572000 dockerd[3905]: time="2024-10-14T14:05:27.128814284Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 14 14:05:27 functional-572000 dockerd[3905]: time="2024-10-14T14:05:27.129013285Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 14 14:05:27 functional-572000 dockerd[3905]: time="2024-10-14T14:05:27.129095785Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:05:27 functional-572000 dockerd[3905]: time="2024-10-14T14:05:27.129305786Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:05:27 functional-572000 dockerd[3905]: time="2024-10-14T14:05:27.174385074Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 14 14:05:27 functional-572000 dockerd[3905]: time="2024-10-14T14:05:27.174470074Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 14 14:05:27 functional-572000 dockerd[3905]: time="2024-10-14T14:05:27.174491974Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:05:27 functional-572000 dockerd[3905]: time="2024-10-14T14:05:27.174616475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:05:27 functional-572000 cri-dockerd[4171]: time="2024-10-14T14:05:27Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/569c4741117ba8e9d755ca63fa76e8f2de60f0921236f514012131ef3f3cabbc/resolv.conf as [nameserver 172.20.96.1]"
	Oct 14 14:05:27 functional-572000 cri-dockerd[4171]: time="2024-10-14T14:05:27Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f267b74ea0b7a52cd241ae5768370144e7351fbe4cb8b5a6c8c1540755802fa3/resolv.conf as [nameserver 172.20.96.1]"
	Oct 14 14:05:27 functional-572000 dockerd[3905]: time="2024-10-14T14:05:27.424395517Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 14 14:05:27 functional-572000 dockerd[3905]: time="2024-10-14T14:05:27.429413138Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 14 14:05:27 functional-572000 dockerd[3905]: time="2024-10-14T14:05:27.429607939Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:05:27 functional-572000 dockerd[3905]: time="2024-10-14T14:05:27.430318842Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:05:27 functional-572000 cri-dockerd[4171]: time="2024-10-14T14:05:27Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/898f0af994685358cb32cccf95e284a08a43c690fac59b17ffdc20ef7292878f/resolv.conf as [nameserver 172.20.96.1]"
	Oct 14 14:05:27 functional-572000 dockerd[3905]: time="2024-10-14T14:05:27.687996420Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 14 14:05:27 functional-572000 dockerd[3905]: time="2024-10-14T14:05:27.688084221Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 14 14:05:27 functional-572000 dockerd[3905]: time="2024-10-14T14:05:27.688098921Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:05:27 functional-572000 dockerd[3905]: time="2024-10-14T14:05:27.690852832Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:05:27 functional-572000 dockerd[3905]: time="2024-10-14T14:05:27.976980647Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 14 14:05:27 functional-572000 dockerd[3905]: time="2024-10-14T14:05:27.982152469Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 14 14:05:27 functional-572000 dockerd[3905]: time="2024-10-14T14:05:27.982180069Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:05:27 functional-572000 dockerd[3905]: time="2024-10-14T14:05:27.982412570Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4b46e3a68e70e       c69fa2e9cbf5f       2 minutes ago       Running             coredns                   1                   898f0af994685       coredns-7c65d6cfc9-pst6d
	027eea5674327       60c005f310ff3       2 minutes ago       Running             kube-proxy                1                   f267b74ea0b7a       kube-proxy-5z6cf
	51bfc0cc1ef5b       6e38f40d628db       2 minutes ago       Running             storage-provisioner       1                   569c4741117ba       storage-provisioner
	bb56e1c9dc49d       2e96e5913fc06       2 minutes ago       Running             etcd                      1                   98f481c79e7e1       etcd-functional-572000
	e7e171c91d162       6bab7719df100       2 minutes ago       Running             kube-apiserver            1                   e73ad7a96335a       kube-apiserver-functional-572000
	c72fe053ad316       175ffd71cce3d       2 minutes ago       Running             kube-controller-manager   1                   d8a710d547654       kube-controller-manager-functional-572000
	e85ef4c3bcaf5       9aa1fad941575       2 minutes ago       Running             kube-scheduler            1                   3f7e0630f9156       kube-scheduler-functional-572000
	230f6f1683790       6e38f40d628db       3 minutes ago       Exited              storage-provisioner       0                   2856135e1f5d7       storage-provisioner
	0cf202c76d509       c69fa2e9cbf5f       3 minutes ago       Exited              coredns                   0                   44ccf69831a78       coredns-7c65d6cfc9-pst6d
	d2fc2b8d7f0cf       60c005f310ff3       3 minutes ago       Exited              kube-proxy                0                   9d20c3e8340df       kube-proxy-5z6cf
	6eafcec504e7d       2e96e5913fc06       4 minutes ago       Exited              etcd                      0                   9b7c8a6fcd73e       etcd-functional-572000
	b3106c84c81f2       9aa1fad941575       4 minutes ago       Exited              kube-scheduler            0                   040754d37cd0f       kube-scheduler-functional-572000
	44129de291317       6bab7719df100       4 minutes ago       Exited              kube-apiserver            0                   0ae43ff1c1153       kube-apiserver-functional-572000
	19354c3ceec63       175ffd71cce3d       4 minutes ago       Exited              kube-controller-manager   0                   87b680ba1c14a       kube-controller-manager-functional-572000
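	
	Each control-plane component shows an exited ATTEMPT-0 container next to a running ATTEMPT-1 container, i.e. the whole control plane was restarted once, consistent with the etcd and apiserver shutdowns logged around 14:05. The listing can be reproduced inside the VM with crictl, which minikube's guest image normally ships (an assumption worth verifying on other images):
	
	  minikube -p functional-572000 ssh "sudo crictl ps -a"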
	
	
	==> coredns [0cf202c76d50] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6020d6b23d8ab86e45d1d2aab12a43bd19ffd1800c3e6fd0a66779be525e59cd5618fbe141b55ee3a43e920652bc72e7efafa696e55e63b2f614e5fc80dabe10
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:51084 - 6392 "HINFO IN 1113943498342967883.4511044989308176730. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.046204907s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[575274381]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (14-Oct-2024 14:03:37.245) (total time: 30001ms):
	Trace[575274381]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (14:04:07.246)
	Trace[575274381]: [30.001190555s] [30.001190555s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[469699633]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (14-Oct-2024 14:03:37.248) (total time: 30000ms):
	Trace[469699633]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (14:04:07.248)
	Trace[469699633]: [30.000515147s] [30.000515147s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[164175727]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (14-Oct-2024 14:03:37.247) (total time: 30001ms):
	Trace[164175727]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (14:04:07.249)
	Trace[164175727]: [30.001856156s] [30.001856156s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
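	
	The i/o timeouts above target 10.96.0.1:443, the ClusterIP of the default kubernetes Service, so this CoreDNS instance simply could not reach the API VIP while the control plane was down; the SIGTERM that follows is the planned restart, not a crash. The VIP can be confirmed with:
	
	  kubectl --context functional-572000 get svc kubernetes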
	
	
	==> coredns [4b46e3a68e70] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6020d6b23d8ab86e45d1d2aab12a43bd19ffd1800c3e6fd0a66779be525e59cd5618fbe141b55ee3a43e920652bc72e7efafa696e55e63b2f614e5fc80dabe10
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:37725 - 53695 "HINFO IN 1798171060095169347.1691999058617256445. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.144618496s
	
	
	==> describe nodes <==
	Name:               functional-572000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-572000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b
	                    minikube.k8s.io/name=functional-572000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_14T07_03_31_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Oct 2024 14:03:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-572000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Oct 2024 14:07:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 14 Oct 2024 14:07:28 +0000   Mon, 14 Oct 2024 14:03:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 14 Oct 2024 14:07:28 +0000   Mon, 14 Oct 2024 14:03:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 14 Oct 2024 14:07:28 +0000   Mon, 14 Oct 2024 14:03:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 14 Oct 2024 14:07:28 +0000   Mon, 14 Oct 2024 14:03:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.20.99.72
	  Hostname:    functional-572000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	System Info:
	  Machine ID:                 33a1b4584c5f4e4f81be54aa9ab8d29b
	  System UUID:                8b611f32-ac99-4046-866c-59d4585e0d4f
	  Boot ID:                    2895bd37-f937-4f87-8c71-14d9deeffdaa
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-pst6d                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     3m57s
	  kube-system                 etcd-functional-572000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m4s
	  kube-system                 kube-apiserver-functional-572000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 kube-controller-manager-functional-572000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 kube-proxy-5z6cf                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m57s
	  kube-system                 kube-scheduler-functional-572000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m54s                  kube-proxy       
	  Normal  Starting                 2m4s                   kube-proxy       
	  Normal  NodeHasSufficientPID     4m2s                   kubelet          Node functional-572000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m2s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m2s                   kubelet          Node functional-572000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m2s                   kubelet          Node functional-572000 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 4m2s                   kubelet          Starting kubelet.
	  Normal  NodeReady                4m                     kubelet          Node functional-572000 status is now: NodeReady
	  Normal  RegisteredNode           3m57s                  node-controller  Node functional-572000 event: Registered Node functional-572000 in Controller
	  Normal  Starting                 2m11s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m11s (x8 over 2m11s)  kubelet          Node functional-572000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m11s (x8 over 2m11s)  kubelet          Node functional-572000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m11s (x7 over 2m11s)  kubelet          Node functional-572000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m11s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m3s                   node-controller  Node functional-572000 event: Registered Node functional-572000 in Controller
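	
	The event list shows two kubelet generations: the roughly 4m-old entries from first boot and the 2m11s entries from the restart, each followed by its own RegisteredNode event. To read them chronologically rather than grouped, something like this helps:
	
	  kubectl --context functional-572000 get events -A --sort-by=.metadata.creationTimestamp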
	
	
	==> dmesg <==
	[  +0.107677] kauditd_printk_skb: 202 callbacks suppressed
	[  +5.001855] kauditd_printk_skb: 26 callbacks suppressed
	[  +0.641747] systemd-fstab-generator[1674]: Ignoring "noauto" option for root device
	[  +6.159470] systemd-fstab-generator[1826]: Ignoring "noauto" option for root device
	[  +0.095852] kauditd_printk_skb: 34 callbacks suppressed
	[  +8.022058] systemd-fstab-generator[2231]: Ignoring "noauto" option for root device
	[  +0.151673] kauditd_printk_skb: 62 callbacks suppressed
	[  +4.840157] systemd-fstab-generator[2341]: Ignoring "noauto" option for root device
	[  +0.226490] kauditd_printk_skb: 12 callbacks suppressed
	[  +8.022809] kauditd_printk_skb: 71 callbacks suppressed
	[Oct14 14:05] systemd-fstab-generator[3420]: Ignoring "noauto" option for root device
	[  +0.647261] systemd-fstab-generator[3455]: Ignoring "noauto" option for root device
	[  +0.256308] systemd-fstab-generator[3467]: Ignoring "noauto" option for root device
	[  +0.322473] systemd-fstab-generator[3481]: Ignoring "noauto" option for root device
	[  +5.366011] kauditd_printk_skb: 89 callbacks suppressed
	[  +7.920312] systemd-fstab-generator[4121]: Ignoring "noauto" option for root device
	[  +0.222846] systemd-fstab-generator[4132]: Ignoring "noauto" option for root device
	[  +0.206096] systemd-fstab-generator[4144]: Ignoring "noauto" option for root device
	[  +0.304319] systemd-fstab-generator[4159]: Ignoring "noauto" option for root device
	[  +0.958674] systemd-fstab-generator[4330]: Ignoring "noauto" option for root device
	[  +4.115319] systemd-fstab-generator[4453]: Ignoring "noauto" option for root device
	[  +0.116493] kauditd_printk_skb: 136 callbacks suppressed
	[  +5.502948] kauditd_printk_skb: 52 callbacks suppressed
	[ +14.007324] systemd-fstab-generator[5364]: Ignoring "noauto" option for root device
	[  +0.176289] kauditd_printk_skb: 33 callbacks suppressed
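	
	The systemd-fstab-generator and kauditd lines are routine guest-boot noise that recurs at each component restart; nothing here points at a kernel-level problem. The live buffer can be re-read with:
	
	  minikube -p functional-572000 ssh "sudo dmesg | tail -n 30"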
	
	
	==> etcd [6eafcec504e7] <==
	{"level":"info","ts":"2024-10-14T14:03:25.080539Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3d2ca17af04ab86f became leader at term 2"}
	{"level":"info","ts":"2024-10-14T14:03:25.080685Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 3d2ca17af04ab86f elected leader 3d2ca17af04ab86f at term 2"}
	{"level":"info","ts":"2024-10-14T14:03:25.089134Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-14T14:03:25.104275Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"3d2ca17af04ab86f","local-member-attributes":"{Name:functional-572000 ClientURLs:[https://172.20.99.72:2379]}","request-path":"/0/members/3d2ca17af04ab86f/attributes","cluster-id":"80e5fe671234a582","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-14T14:03:25.104430Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-14T14:03:25.105010Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-14T14:03:25.112578Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-14T14:03:25.123261Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-14T14:03:25.124565Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-14T14:03:25.159015Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-14T14:03:25.162106Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-14T14:03:25.162616Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.20.99.72:2379"}
	{"level":"info","ts":"2024-10-14T14:03:25.173197Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e5fe671234a582","local-member-id":"3d2ca17af04ab86f","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-14T14:03:25.189191Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-14T14:03:25.189300Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-14T14:05:02.376848Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-10-14T14:05:02.376898Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"functional-572000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://172.20.99.72:2380"],"advertise-client-urls":["https://172.20.99.72:2379"]}
	{"level":"warn","ts":"2024-10-14T14:05:02.376955Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-14T14:05:02.377115Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-14T14:05:02.483693Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 172.20.99.72:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-14T14:05:02.483800Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 172.20.99.72:2379: use of closed network connection"}
	{"level":"info","ts":"2024-10-14T14:05:02.483859Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"3d2ca17af04ab86f","current-leader-member-id":"3d2ca17af04ab86f"}
	{"level":"info","ts":"2024-10-14T14:05:02.494907Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"172.20.99.72:2380"}
	{"level":"info","ts":"2024-10-14T14:05:02.495138Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"172.20.99.72:2380"}
	{"level":"info","ts":"2024-10-14T14:05:02.495158Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"functional-572000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://172.20.99.72:2380"],"advertise-client-urls":["https://172.20.99.72:2379"]}
	
	
	==> etcd [bb56e1c9dc49] <==
	{"level":"info","ts":"2024-10-14T14:05:23.532166Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e5fe671234a582","local-member-id":"3d2ca17af04ab86f","added-peer-id":"3d2ca17af04ab86f","added-peer-peer-urls":["https://172.20.99.72:2380"]}
	{"level":"info","ts":"2024-10-14T14:05:23.531969Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-14T14:05:23.538638Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e5fe671234a582","local-member-id":"3d2ca17af04ab86f","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-14T14:05:23.557311Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-14T14:05:23.557151Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-10-14T14:05:23.557181Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"172.20.99.72:2380"}
	{"level":"info","ts":"2024-10-14T14:05:23.562055Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"172.20.99.72:2380"}
	{"level":"info","ts":"2024-10-14T14:05:23.563185Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"3d2ca17af04ab86f","initial-advertise-peer-urls":["https://172.20.99.72:2380"],"listen-peer-urls":["https://172.20.99.72:2380"],"advertise-client-urls":["https://172.20.99.72:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.20.99.72:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-14T14:05:23.563572Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-14T14:05:24.684920Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3d2ca17af04ab86f is starting a new election at term 2"}
	{"level":"info","ts":"2024-10-14T14:05:24.685265Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3d2ca17af04ab86f became pre-candidate at term 2"}
	{"level":"info","ts":"2024-10-14T14:05:24.685447Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3d2ca17af04ab86f received MsgPreVoteResp from 3d2ca17af04ab86f at term 2"}
	{"level":"info","ts":"2024-10-14T14:05:24.685609Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3d2ca17af04ab86f became candidate at term 3"}
	{"level":"info","ts":"2024-10-14T14:05:24.685690Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3d2ca17af04ab86f received MsgVoteResp from 3d2ca17af04ab86f at term 3"}
	{"level":"info","ts":"2024-10-14T14:05:24.685829Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3d2ca17af04ab86f became leader at term 3"}
	{"level":"info","ts":"2024-10-14T14:05:24.685947Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 3d2ca17af04ab86f elected leader 3d2ca17af04ab86f at term 3"}
	{"level":"info","ts":"2024-10-14T14:05:24.696035Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"3d2ca17af04ab86f","local-member-attributes":"{Name:functional-572000 ClientURLs:[https://172.20.99.72:2379]}","request-path":"/0/members/3d2ca17af04ab86f/attributes","cluster-id":"80e5fe671234a582","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-14T14:05:24.696220Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-14T14:05:24.696613Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-14T14:05:24.697895Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-14T14:05:24.698822Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.20.99.72:2379"}
	{"level":"info","ts":"2024-10-14T14:05:24.705874Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-14T14:05:24.706139Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-14T14:05:24.707018Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-14T14:05:24.708101Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 14:07:32 up 6 min,  0 users,  load average: 0.23, 0.38, 0.19
	Linux functional-572000 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [44129de29131] <==
	W1014 14:05:11.561819       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 14:05:11.561877       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 14:05:11.565076       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 14:05:11.576299       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 14:05:11.605949       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 14:05:11.624997       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 14:05:11.719182       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 14:05:11.721820       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 14:05:11.727615       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 14:05:11.742208       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 14:05:11.742468       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 14:05:11.873483       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 14:05:11.905981       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 14:05:11.912697       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 14:05:11.944579       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 14:05:11.960894       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 14:05:12.029245       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 14:05:12.058594       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 14:05:12.062920       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 14:05:12.083056       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 14:05:12.183377       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 14:05:12.249854       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 14:05:12.262609       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 14:05:12.325104       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 14:05:12.345524       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
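	
	All of these dial errors point at 127.0.0.1:2379, the local etcd that had just been stopped (see the etcd [6eafcec504e7] shutdown at 14:05:02), so this is expected restart noise from the outgoing apiserver rather than an independent fault. Once the replacement apiserver is up, its readiness endpoint answers:
	
	  kubectl --context functional-572000 get --raw /readyz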
	
	
	==> kube-apiserver [e7e171c91d16] <==
	I1014 14:05:26.308001       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1014 14:05:26.308068       1 shared_informer.go:320] Caches are synced for configmaps
	I1014 14:05:26.308105       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1014 14:05:26.316637       1 aggregator.go:171] initial CRD sync complete...
	I1014 14:05:26.316934       1 autoregister_controller.go:144] Starting autoregister controller
	I1014 14:05:26.317115       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1014 14:05:26.317260       1 cache.go:39] Caches are synced for autoregister controller
	I1014 14:05:26.322874       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1014 14:05:26.332446       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1014 14:05:26.332484       1 policy_source.go:224] refreshing policies
	I1014 14:05:26.374010       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1014 14:05:26.374116       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1014 14:05:26.385538       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1014 14:05:26.385790       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1014 14:05:26.392544       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I1014 14:05:26.417754       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1014 14:05:27.190050       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1014 14:05:27.993421       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.20.99.72]
	I1014 14:05:27.995559       1 controller.go:615] quota admission added evaluator for: endpoints
	I1014 14:05:28.009016       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1014 14:05:28.363339       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1014 14:05:28.402947       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1014 14:05:28.472629       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1014 14:05:28.533439       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1014 14:05:28.546312       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [19354c3ceec6] <==
	I1014 14:03:35.045940       1 shared_informer.go:320] Caches are synced for attach detach
	I1014 14:03:35.046709       1 shared_informer.go:320] Caches are synced for resource quota
	I1014 14:03:35.077939       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I1014 14:03:35.079633       1 shared_informer.go:320] Caches are synced for taint
	I1014 14:03:35.080548       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1014 14:03:35.106760       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-572000"
	I1014 14:03:35.106828       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1014 14:03:35.087675       1 shared_informer.go:320] Caches are synced for TTL
	I1014 14:03:35.163221       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-572000"
	I1014 14:03:35.528383       1 shared_informer.go:320] Caches are synced for garbage collector
	I1014 14:03:35.528692       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I1014 14:03:35.544643       1 shared_informer.go:320] Caches are synced for garbage collector
	I1014 14:03:35.867129       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-572000"
	I1014 14:03:36.035920       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="324.687195ms"
	I1014 14:03:36.071205       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="35.181074ms"
	I1014 14:03:36.097710       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="26.46348ms"
	I1014 14:03:36.121544       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="22.990683ms"
	I1014 14:03:36.121703       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="43.7µs"
	I1014 14:03:38.003307       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="58.6µs"
	I1014 14:03:38.061010       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="44.801µs"
	I1014 14:03:38.075108       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="52.9µs"
	I1014 14:03:38.079424       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="32.6µs"
	I1014 14:03:40.857020       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-572000"
	I1014 14:04:17.703211       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="16.953093ms"
	I1014 14:04:17.705363       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="40.4µs"
	
	
	==> kube-controller-manager [c72fe053ad31] <==
	I1014 14:05:29.648490       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1014 14:05:29.648870       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-572000"
	I1014 14:05:29.649154       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1014 14:05:29.671463       1 shared_informer.go:320] Caches are synced for node
	I1014 14:05:29.671578       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I1014 14:05:29.671656       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1014 14:05:29.671668       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I1014 14:05:29.672080       1 shared_informer.go:320] Caches are synced for cidrallocator
	I1014 14:05:29.672460       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-572000"
	I1014 14:05:29.677588       1 shared_informer.go:320] Caches are synced for persistent volume
	I1014 14:05:29.693534       1 shared_informer.go:320] Caches are synced for cronjob
	I1014 14:05:29.696560       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I1014 14:05:29.702397       1 shared_informer.go:320] Caches are synced for GC
	I1014 14:05:29.795777       1 shared_informer.go:320] Caches are synced for disruption
	I1014 14:05:29.795927       1 shared_informer.go:320] Caches are synced for stateful set
	I1014 14:05:29.823693       1 shared_informer.go:320] Caches are synced for resource quota
	I1014 14:05:29.831333       1 shared_informer.go:320] Caches are synced for endpoint
	I1014 14:05:29.845826       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I1014 14:05:29.855571       1 shared_informer.go:320] Caches are synced for resource quota
	I1014 14:05:30.255772       1 shared_informer.go:320] Caches are synced for garbage collector
	I1014 14:05:30.289809       1 shared_informer.go:320] Caches are synced for garbage collector
	I1014 14:05:30.289974       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I1014 14:06:27.681837       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-572000"
	I1014 14:06:58.181953       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-572000"
	I1014 14:07:28.639329       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-572000"
	
	
	==> kube-proxy [027eea567432] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1014 14:05:28.024893       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1014 14:05:28.041509       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["172.20.99.72"]
	E1014 14:05:28.042062       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1014 14:05:28.141157       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1014 14:05:28.141201       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1014 14:05:28.141229       1 server_linux.go:169] "Using iptables Proxier"
	I1014 14:05:28.145145       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1014 14:05:28.145917       1 server.go:483] "Version info" version="v1.31.1"
	I1014 14:05:28.146265       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 14:05:28.147991       1 config.go:199] "Starting service config controller"
	I1014 14:05:28.148166       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1014 14:05:28.148344       1 config.go:105] "Starting endpoint slice config controller"
	I1014 14:05:28.148482       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1014 14:05:28.149188       1 config.go:328] "Starting node config controller"
	I1014 14:05:28.149353       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1014 14:05:28.248577       1 shared_informer.go:320] Caches are synced for service config
	I1014 14:05:28.248890       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1014 14:05:28.250516       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [d2fc2b8d7f0c] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1014 14:03:37.052834       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1014 14:03:37.073544       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["172.20.99.72"]
	E1014 14:03:37.074138       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1014 14:03:37.176227       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1014 14:03:37.176289       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1014 14:03:37.176331       1 server_linux.go:169] "Using iptables Proxier"
	I1014 14:03:37.187374       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1014 14:03:37.194530       1 server.go:483] "Version info" version="v1.31.1"
	I1014 14:03:37.194620       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 14:03:37.196682       1 config.go:199] "Starting service config controller"
	I1014 14:03:37.196808       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1014 14:03:37.196842       1 config.go:105] "Starting endpoint slice config controller"
	I1014 14:03:37.196848       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1014 14:03:37.197766       1 config.go:328] "Starting node config controller"
	I1014 14:03:37.209057       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1014 14:03:37.297143       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1014 14:03:37.297234       1 shared_informer.go:320] Caches are synced for service config
	I1014 14:03:37.309395       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [b3106c84c81f] <==
	E1014 14:03:28.174550       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 14:03:28.192669       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1014 14:03:28.193100       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1014 14:03:28.202295       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1014 14:03:28.202334       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 14:03:28.307194       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1014 14:03:28.307520       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 14:03:28.374581       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1014 14:03:28.374968       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 14:03:28.412340       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1014 14:03:28.412764       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 14:03:28.445340       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1014 14:03:28.445613       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1014 14:03:28.576667       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1014 14:03:28.576783       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1014 14:03:28.577157       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1014 14:03:28.577313       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1014 14:03:28.727492       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1014 14:03:28.727561       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 14:03:28.727625       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1014 14:03:28.727968       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 14:03:28.767994       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1014 14:03:28.768256       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I1014 14:03:31.607835       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1014 14:05:02.405345       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [e85ef4c3bcaf] <==
	I1014 14:05:24.257101       1 serving.go:386] Generated self-signed cert in-memory
	W1014 14:05:26.231984       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1014 14:05:26.232124       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1014 14:05:26.232267       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1014 14:05:26.232372       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1014 14:05:26.324301       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I1014 14:05:26.324338       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 14:05:26.328104       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1014 14:05:26.328454       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1014 14:05:26.328774       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1014 14:05:26.328700       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1014 14:05:26.429985       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 14 14:05:23 functional-572000 kubelet[4460]: E1014 14:05:23.069456    4460 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8441/api/v1/nodes\": dial tcp 172.20.99.72:8441: connect: connection refused" node="functional-572000"
	Oct 14 14:05:23 functional-572000 kubelet[4460]: W1014 14:05:23.099386    4460 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.20.99.72:8441: connect: connection refused
	Oct 14 14:05:23 functional-572000 kubelet[4460]: E1014 14:05:23.099537    4460 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://control-plane.minikube.internal:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.20.99.72:8441: connect: connection refused" logger="UnhandledError"
	Oct 14 14:05:24 functional-572000 kubelet[4460]: I1014 14:05:24.672341    4460 kubelet_node_status.go:72] "Attempting to register node" node="functional-572000"
	Oct 14 14:05:26 functional-572000 kubelet[4460]: I1014 14:05:26.433794    4460 apiserver.go:52] "Watching apiserver"
	Oct 14 14:05:26 functional-572000 kubelet[4460]: I1014 14:05:26.439570    4460 kubelet_node_status.go:111] "Node was previously registered" node="functional-572000"
	Oct 14 14:05:26 functional-572000 kubelet[4460]: I1014 14:05:26.439940    4460 kubelet_node_status.go:75] "Successfully registered node" node="functional-572000"
	Oct 14 14:05:26 functional-572000 kubelet[4460]: I1014 14:05:26.440151    4460 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 14 14:05:26 functional-572000 kubelet[4460]: I1014 14:05:26.441501    4460 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 14 14:05:26 functional-572000 kubelet[4460]: I1014 14:05:26.475252    4460 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 14 14:05:26 functional-572000 kubelet[4460]: I1014 14:05:26.511754    4460 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b623a456-b6e3-47a3-babf-66d8698f5a58-lib-modules\") pod \"kube-proxy-5z6cf\" (UID: \"b623a456-b6e3-47a3-babf-66d8698f5a58\") " pod="kube-system/kube-proxy-5z6cf"
	Oct 14 14:05:26 functional-572000 kubelet[4460]: I1014 14:05:26.512044    4460 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/8ec5c3c6-2441-47cb-9862-e6c87bce62c2-tmp\") pod \"storage-provisioner\" (UID: \"8ec5c3c6-2441-47cb-9862-e6c87bce62c2\") " pod="kube-system/storage-provisioner"
	Oct 14 14:05:26 functional-572000 kubelet[4460]: I1014 14:05:26.512175    4460 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b623a456-b6e3-47a3-babf-66d8698f5a58-xtables-lock\") pod \"kube-proxy-5z6cf\" (UID: \"b623a456-b6e3-47a3-babf-66d8698f5a58\") " pod="kube-system/kube-proxy-5z6cf"
	Oct 14 14:05:27 functional-572000 kubelet[4460]: I1014 14:05:27.368965    4460 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f267b74ea0b7a52cd241ae5768370144e7351fbe4cb8b5a6c8c1540755802fa3"
	Oct 14 14:05:27 functional-572000 kubelet[4460]: I1014 14:05:27.677058    4460 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="569c4741117ba8e9d755ca63fa76e8f2de60f0921236f514012131ef3f3cabbc"
	Oct 14 14:06:21 functional-572000 kubelet[4460]: E1014 14:06:21.553447    4460 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 14 14:06:21 functional-572000 kubelet[4460]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 14 14:06:21 functional-572000 kubelet[4460]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 14 14:06:21 functional-572000 kubelet[4460]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 14 14:06:21 functional-572000 kubelet[4460]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 14 14:07:21 functional-572000 kubelet[4460]: E1014 14:07:21.546322    4460 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 14 14:07:21 functional-572000 kubelet[4460]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 14 14:07:21 functional-572000 kubelet[4460]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 14 14:07:21 functional-572000 kubelet[4460]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 14 14:07:21 functional-572000 kubelet[4460]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [230f6f168379] <==
	I1014 14:03:43.605015       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1014 14:03:43.626537       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1014 14:03:43.626659       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1014 14:03:43.640574       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1014 14:03:43.641042       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a2dc9f40-1cb4-4f15-9cf8-2e16139a9445", APIVersion:"v1", ResourceVersion:"388", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-572000_9e853796-9332-4e65-b51a-9bf911363d02 became leader
	I1014 14:03:43.641204       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-572000_9e853796-9332-4e65-b51a-9bf911363d02!
	I1014 14:03:43.742184       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-572000_9e853796-9332-4e65-b51a-9bf911363d02!
	
	
	==> storage-provisioner [51bfc0cc1ef5] <==
	I1014 14:05:27.615459       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1014 14:05:27.654962       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1014 14:05:27.655017       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1014 14:05:45.076495       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1014 14:05:45.076692       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-572000_7b8acb36-4396-433d-be93-9e6ded0b7b8c!
	I1014 14:05:45.077877       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a2dc9f40-1cb4-4f15-9cf8-2e16139a9445", APIVersion:"v1", ResourceVersion:"554", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-572000_7b8acb36-4396-433d-be93-9e6ded0b7b8c became leader
	I1014 14:05:45.179773       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-572000_7b8acb36-4396-433d-be93-9e6ded0b7b8c!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-572000 -n functional-572000
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-572000 -n functional-572000: (11.6485024s)
helpers_test.go:261: (dbg) Run:  kubectl --context functional-572000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (32.96s)
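The failure above is environmental rather than functional: the helper hard-links out/minikube-windows-amd64.exe to out\kubectl.exe, and on Windows the link call fails when a stale out\kubectl.exe survives from a previous run. A minimal sketch of the defensive pattern, assuming the helper uses os.Link (linkBinary is a hypothetical name, not the suite's actual helper):

	package main

	import (
		"fmt"
		"os"
	)

	// linkBinary removes any stale target before linking, so a rerun does
	// not fail with "Cannot create a file when that file already exists."
	func linkBinary(src, dst string) error {
		// os.Remove tolerates a missing target; any other error is real.
		if err := os.Remove(dst); err != nil && !os.IsNotExist(err) {
			return fmt.Errorf("removing stale %s: %w", dst, err)
		}
		return os.Link(src, dst)
	}

	func main() {
		if err := linkBinary(`out/minikube-windows-amd64.exe`, `out\kubectl.exe`); err != nil {
			fmt.Println("link failed:", err)
		}
	}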

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (15.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-572000 service --namespace=default --https --url hello-node
functional_test.go:1509: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-572000 service --namespace=default --https --url hello-node: exit status 1 (15.0115886s)
functional_test.go:1511: failed to get service url. args "out/minikube-windows-amd64.exe -p functional-572000 service --namespace=default --https --url hello-node" : exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (15.01s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (15.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-572000 service hello-node --url --format={{.IP}}
functional_test.go:1540: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-572000 service hello-node --url --format={{.IP}}: exit status 1 (15.0141573s)
functional_test.go:1542: failed to get service url with custom format. args "out/minikube-windows-amd64.exe -p functional-572000 service hello-node --url --format={{.IP}}": exit status 1
functional_test.go:1548: "" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (15.01s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (15.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-572000 service hello-node --url
functional_test.go:1559: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-572000 service hello-node --url: exit status 1 (15.0360511s)
functional_test.go:1561: failed to get service url. args: "out/minikube-windows-amd64.exe -p functional-572000 service hello-node --url": exit status 1
functional_test.go:1565: found endpoint for hello-node: 
functional_test.go:1573: expected scheme to be -"http"- got scheme: *""*
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (15.04s)
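The three ServiceCmd failures above (HTTPS, Format, URL) share one shape: a single 15-second "minikube service --url" invocation exits with status 1 and yields an empty URL. A minimal retry sketch, on the assumption that the Hyper-V service tunnel simply needs more than one attempt to come up (serviceURL and its retry counts are hypothetical, not part of the test suite):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// serviceURL polls "minikube service --url" a few times instead of
	// failing the subtest on the first empty result.
	func serviceURL(profile, svc string) (string, error) {
		for attempt := 0; attempt < 4; attempt++ {
			out, err := exec.Command("out/minikube-windows-amd64.exe",
				"-p", profile, "service", svc, "--url").Output()
			if err == nil {
				if url := strings.TrimSpace(string(out)); strings.HasPrefix(url, "http") {
					return url, nil
				}
			}
			time.Sleep(5 * time.Second)
		}
		return "", fmt.Errorf("no URL for service %q after retries", svc)
	}

	func main() {
		url, err := serviceURL("functional-572000", "hello-node")
		fmt.Println(url, err)
	}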

                                                
                                    
TestMultiControlPlane/serial/StartCluster (436.42s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p ha-132600 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperv
E1014 07:19:10.843971     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\addons-043000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1014 07:20:37.374684     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\functional-572000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1014 07:20:37.382519     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\functional-572000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1014 07:20:37.395279     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\functional-572000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1014 07:20:37.417222     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\functional-572000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1014 07:20:37.459353     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\functional-572000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1014 07:20:37.541848     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\functional-572000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1014 07:20:37.703735     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\functional-572000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1014 07:20:38.027107     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\functional-572000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1014 07:20:38.668856     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\functional-572000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1014 07:20:39.951251     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\functional-572000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1014 07:20:42.514360     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\functional-572000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1014 07:20:47.636531     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\functional-572000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1014 07:20:57.879398     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\functional-572000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1014 07:21:18.362281     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\functional-572000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1014 07:21:59.325330     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\functional-572000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1014 07:23:21.247643     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\functional-572000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1014 07:24:10.845073     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\addons-043000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1014 07:25:37.375666     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\functional-572000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:101: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p ha-132600 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperv: exit status 90 (6m42.8915157s)

                                                
                                                
-- stdout --
	* [ha-132600] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5011 Build 19045.5011
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19790
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting "ha-132600" primary control-plane node in "ha-132600" cluster
	* Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	* Starting "ha-132600-m02" control-plane node in "ha-132600" cluster
	* Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Found network options:
	  - NO_PROXY=172.20.108.120
	  - NO_PROXY=172.20.108.120
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1014 07:19:00.342865   13076 out.go:345] Setting OutFile to fd 1228 ...
	I1014 07:19:00.344546   13076 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:19:00.344546   13076 out.go:358] Setting ErrFile to fd 824...
	I1014 07:19:00.344546   13076 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:19:00.368247   13076 out.go:352] Setting JSON to false
	I1014 07:19:00.372277   13076 start.go:129] hostinfo: {"hostname":"minikube1","uptime":101054,"bootTime":1728814485,"procs":202,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5011 Build 19045.5011","kernelVersion":"10.0.19045.5011 Build 19045.5011","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W1014 07:19:00.372803   13076 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1014 07:19:00.379728   13076 out.go:177] * [ha-132600] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5011 Build 19045.5011
	I1014 07:19:00.386820   13076 notify.go:220] Checking for updates...
	I1014 07:19:00.389571   13076 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1014 07:19:00.392679   13076 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 07:19:00.395167   13076 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I1014 07:19:00.398285   13076 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 07:19:00.400520   13076 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 07:19:00.404118   13076 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 07:19:05.654471   13076 out.go:177] * Using the hyperv driver based on user configuration
	I1014 07:19:05.658285   13076 start.go:297] selected driver: hyperv
	I1014 07:19:05.658285   13076 start.go:901] validating driver "hyperv" against <nil>
	I1014 07:19:05.658285   13076 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 07:19:05.705211   13076 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1014 07:19:05.706541   13076 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 07:19:05.706541   13076 cni.go:84] Creating CNI manager for ""
	I1014 07:19:05.706541   13076 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1014 07:19:05.706541   13076 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1014 07:19:05.707066   13076 start.go:340] cluster config:
	{Name:ha-132600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-132600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 07:19:05.707207   13076 iso.go:125] acquiring lock: {Name:mkb71635bcbccba29dce9048ad4cc71430a7e577 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 07:19:05.711264   13076 out.go:177] * Starting "ha-132600" primary control-plane node in "ha-132600" cluster
	I1014 07:19:05.715097   13076 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1014 07:19:05.715262   13076 preload.go:146] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I1014 07:19:05.715344   13076 cache.go:56] Caching tarball of preloaded images
	I1014 07:19:05.715452   13076 preload.go:172] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1014 07:19:05.715452   13076 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1014 07:19:05.716553   13076 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\config.json ...
	I1014 07:19:05.716898   13076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\config.json: {Name:mkbd11bf3f90adebf1f4630d2e8deaac328748d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:19:05.717886   13076 start.go:360] acquireMachinesLock for ha-132600: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 07:19:05.717886   13076 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-132600"
	I1014 07:19:05.718562   13076 start.go:93] Provisioning new machine with config: &{Name:ha-132600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-132600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1014 07:19:05.718562   13076 start.go:125] createHost starting for "" (driver="hyperv")
	I1014 07:19:05.721891   13076 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1014 07:19:05.722555   13076 start.go:159] libmachine.API.Create for "ha-132600" (driver="hyperv")
	I1014 07:19:05.722555   13076 client.go:168] LocalClient.Create starting
	I1014 07:19:05.722555   13076 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I1014 07:19:05.723316   13076 main.go:141] libmachine: Decoding PEM data...
	I1014 07:19:05.723316   13076 main.go:141] libmachine: Parsing certificate...
	I1014 07:19:05.723316   13076 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I1014 07:19:05.723877   13076 main.go:141] libmachine: Decoding PEM data...
	I1014 07:19:05.723955   13076 main.go:141] libmachine: Parsing certificate...
	I1014 07:19:05.723955   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I1014 07:19:07.752927   13076 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I1014 07:19:07.753576   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:07.753793   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I1014 07:19:09.445838   13076 main.go:141] libmachine: [stdout =====>] : False
	
	I1014 07:19:09.445838   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:09.446315   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1014 07:19:10.901034   13076 main.go:141] libmachine: [stdout =====>] : True
	
	I1014 07:19:10.901034   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:10.901034   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1014 07:19:14.320078   13076 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1014 07:19:14.320309   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:14.323253   13076 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1014 07:19:14.829443   13076 main.go:141] libmachine: Creating SSH key...
	I1014 07:19:14.988345   13076 main.go:141] libmachine: Creating VM...
	I1014 07:19:14.988345   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1014 07:19:17.790932   13076 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1014 07:19:17.791005   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:17.791145   13076 main.go:141] libmachine: Using switch "Default Switch"
	I1014 07:19:17.791145   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1014 07:19:19.529861   13076 main.go:141] libmachine: [stdout =====>] : True
	
	I1014 07:19:19.529861   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:19.529861   13076 main.go:141] libmachine: Creating VHD
	I1014 07:19:19.530436   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\fixed.vhd' -SizeBytes 10MB -Fixed
	I1014 07:19:23.105904   13076 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 267BC11D-A195-4636-8AE9-EF1D7966C95C
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I1014 07:19:23.105987   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:23.106037   13076 main.go:141] libmachine: Writing magic tar header
	I1014 07:19:23.106037   13076 main.go:141] libmachine: Writing SSH key tar header
	I1014 07:19:23.118040   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\disk.vhd' -VHDType Dynamic -DeleteSource
	I1014 07:19:26.201609   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:26.201950   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:26.202023   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\disk.vhd' -SizeBytes 20000MB
	I1014 07:19:28.768368   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:28.768789   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:28.768969   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-132600 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I1014 07:19:32.226690   13076 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-132600 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I1014 07:19:32.226915   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:32.226915   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-132600 -DynamicMemoryEnabled $false
	I1014 07:19:34.390731   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:34.390837   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:34.391049   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-132600 -Count 2
	I1014 07:19:36.449382   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:36.449628   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:36.449628   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-132600 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\boot2docker.iso'
	I1014 07:19:38.932909   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:38.932909   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:38.932909   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-132600 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\disk.vhd'
	I1014 07:19:41.509229   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:41.509229   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:41.509229   13076 main.go:141] libmachine: Starting VM...
	I1014 07:19:41.509229   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-132600
	I1014 07:19:44.718680   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:44.718680   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:44.719475   13076 main.go:141] libmachine: Waiting for host to start...
	I1014 07:19:44.719770   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:19:46.954823   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:19:46.954823   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:46.955829   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:19:49.408227   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:49.408227   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:50.409369   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:19:52.561607   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:19:52.561912   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:52.561912   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:19:55.081136   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:55.081187   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:56.082401   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:19:58.271497   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:19:58.271497   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:58.272384   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:00.740863   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:20:00.741070   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:01.741536   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:03.902654   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:03.902721   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:03.902721   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:06.378064   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:20:06.378064   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:07.379104   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:09.537329   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:09.537329   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:09.537541   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:12.074320   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:12.074320   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:12.074834   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:14.171613   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:14.171613   13076 main.go:141] libmachine: [stderr =====>] : 
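[Editor sketch] The lines above are libmachine polling Hyper-V roughly once per second until the VM's first network adapter reports an IPv4 address (it finally returns 172.20.108.120 at 07:20:12). A minimal Go sketch of that pattern, under the assumption of illustrative helper names; this is not minikube's actual code:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// runPowerShell mirrors the "[executing ==>]" lines above: one
	// non-interactive PowerShell invocation per query.
	func runPowerShell(cmd string) (string, error) {
		out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", cmd).Output()
		return strings.TrimSpace(string(out)), err
	}

	// waitForIP repeats the Get-VM adapter query until an address appears,
	// matching the ~1s cadence visible in the timestamps above.
	func waitForIP(vm string, timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			ip, _ := runPowerShell(fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vm))
			if ip != "" {
				return ip, nil
			}
			time.Sleep(time.Second)
		}
		return "", fmt.Errorf("timed out waiting for %s to report an IP", vm)
	}

	func main() {
		ip, err := waitForIP("ha-132600", 5*time.Minute)
		fmt.Println(ip, err)
	}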
	I1014 07:20:14.171613   13076 machine.go:93] provisionDockerMachine start ...
	I1014 07:20:14.172377   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:16.225265   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:16.225265   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:16.225634   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:18.692061   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:18.692061   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:18.698261   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:20:18.713495   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.108.120 22 <nil> <nil>}
	I1014 07:20:18.713568   13076 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 07:20:18.839311   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1014 07:20:18.839517   13076 buildroot.go:166] provisioning hostname "ha-132600"
	I1014 07:20:18.839517   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:20.911254   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:20.911424   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:20.911601   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:23.370138   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:23.370756   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:23.377008   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:20:23.377773   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.108.120 22 <nil> <nil>}
	I1014 07:20:23.377773   13076 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-132600 && echo "ha-132600" | sudo tee /etc/hostname
	I1014 07:20:23.537548   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-132600
	
	I1014 07:20:23.537548   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:25.571429   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:25.571429   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:25.572326   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:28.068677   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:28.068677   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:28.077026   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:20:28.078615   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.108.120 22 <nil> <nil>}
	I1014 07:20:28.078615   13076 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-132600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-132600/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-132600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 07:20:28.216809   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: 
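[Editor sketch] The two SSH commands above form an idempotent pair: set and persist the hostname, then rewrite an existing 127.0.1.1 alias or append one, but only if the name is not already in /etc/hosts. Sketched below with a local-exec stand-in for minikube's ssh_runner (assumption: the real runner executes over SSH inside the guest; imports as in the first sketch):

	// sshRun is a stand-in so the sketch compiles; minikube's ssh_runner
	// executes the command over SSH in the guest instead.
	func sshRun(cmd string) (string, error) {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		return string(out), err
	}

	// provisionHostname mirrors the logged steps: kernel hostname, then
	// /etc/hostname, then a guarded /etc/hosts rewrite-or-append.
	func provisionHostname(name string) error {
		if _, err := sshRun(fmt.Sprintf("sudo hostname %s && echo %q | sudo tee /etc/hostname", name, name)); err != nil {
			return err
		}
		script := fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, name)
		_, err := sshRun(script)
		return err
	}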
	I1014 07:20:28.216874   13076 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I1014 07:20:28.216990   13076 buildroot.go:174] setting up certificates
	I1014 07:20:28.217016   13076 provision.go:84] configureAuth start
	I1014 07:20:28.217047   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:30.246758   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:30.247093   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:30.247368   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:32.688428   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:32.688428   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:32.688428   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:34.748908   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:34.748908   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:34.749349   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:37.221628   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:37.221798   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:37.221798   13076 provision.go:143] copyHostCerts
	I1014 07:20:37.221998   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I1014 07:20:37.222444   13076 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I1014 07:20:37.222558   13076 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I1014 07:20:37.223110   13076 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1014 07:20:37.224109   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I1014 07:20:37.224599   13076 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I1014 07:20:37.224599   13076 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I1014 07:20:37.224959   13076 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I1014 07:20:37.225332   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I1014 07:20:37.225332   13076 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I1014 07:20:37.225332   13076 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I1014 07:20:37.226765   13076 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1014 07:20:37.227733   13076 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-132600 san=[127.0.0.1 172.20.108.120 ha-132600 localhost minikube]
	I1014 07:20:37.731674   13076 provision.go:177] copyRemoteCerts
	I1014 07:20:37.744706   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 07:20:37.744706   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:39.801309   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:39.801309   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:39.801309   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:42.273306   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:42.273367   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:42.273367   13076 sshutil.go:53] new ssh client: &{IP:172.20.108.120 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\id_rsa Username:docker}
	I1014 07:20:42.378794   13076 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.6340227s)
	I1014 07:20:42.378847   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1014 07:20:42.378847   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1200 bytes)
	I1014 07:20:42.424772   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1014 07:20:42.425289   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 07:20:42.471397   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1014 07:20:42.471964   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1014 07:20:42.527104   13076 provision.go:87] duration metric: took 14.3100395s to configureAuth
	I1014 07:20:42.527104   13076 buildroot.go:189] setting minikube options for container-runtime
	I1014 07:20:42.527712   13076 config.go:182] Loaded profile config "ha-132600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:20:42.528252   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:44.598308   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:44.598308   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:44.599305   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:47.040424   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:47.040637   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:47.045726   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:20:47.046398   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.108.120 22 <nil> <nil>}
	I1014 07:20:47.046398   13076 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1014 07:20:47.167710   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1014 07:20:47.167801   13076 buildroot.go:70] root file system type: tmpfs
	I1014 07:20:47.168208   13076 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1014 07:20:47.168315   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:49.202111   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:49.202331   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:49.202424   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:51.648278   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:51.648278   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:51.654248   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:20:51.655052   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.108.120 22 <nil> <nil>}
	I1014 07:20:51.655052   13076 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1014 07:20:51.814048   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1014 07:20:51.814229   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:53.871643   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:53.871643   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:53.872030   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:56.286054   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:56.286054   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:56.292357   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:20:56.293125   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.108.120 22 <nil> <nil>}
	I1014 07:20:56.293125   13076 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1014 07:20:58.458028   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1014 07:20:58.458028   13076 machine.go:96] duration metric: took 44.286361s to provisionDockerMachine
	I1014 07:20:58.458028   13076 client.go:171] duration metric: took 1m52.7353371s to LocalClient.Create
	I1014 07:20:58.458028   13076 start.go:167] duration metric: took 1m52.7353371s to libmachine.API.Create "ha-132600"
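[Editor sketch] The diff-or-install one-liner at 07:20:56 is what makes the unit write idempotent: the rendered unit lands at docker.service.new, and only a difference (or, as on this first boot, a missing target, hence the "can't stat" message) triggers the move, daemon-reload, enable, and restart. A sketch reusing the hypothetical sshRun helper above; the %q quoting is a loose approximation of the shell quoting minikube actually uses:

	// updateDockerUnit stages the unit, then lets diff decide whether a
	// move + daemon-reload + restart is needed at all.
	func updateDockerUnit(unit string) error {
		stage := fmt.Sprintf("sudo mkdir -p /lib/systemd/system && printf %%s %q | sudo tee /lib/systemd/system/docker.service.new", unit)
		if _, err := sshRun(stage); err != nil {
			return err
		}
		// diff exits non-zero when the files differ or the target is missing.
		_, err := sshRun("sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }")
		return err
	}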
	I1014 07:20:58.458028   13076 start.go:293] postStartSetup for "ha-132600" (driver="hyperv")
	I1014 07:20:58.458028   13076 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 07:20:58.471262   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 07:20:58.471262   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:21:00.512590   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:21:00.513162   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:00.513321   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:21:02.924795   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:21:02.924929   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:02.925163   13076 sshutil.go:53] new ssh client: &{IP:172.20.108.120 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\id_rsa Username:docker}
	I1014 07:21:03.032582   13076 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.5613141s)
	I1014 07:21:03.044373   13076 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 07:21:03.050831   13076 info.go:137] Remote host: Buildroot 2023.02.9
	I1014 07:21:03.050831   13076 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I1014 07:21:03.051271   13076 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I1014 07:21:03.052198   13076 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem -> 9362.pem in /etc/ssl/certs
	I1014 07:21:03.052295   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem -> /etc/ssl/certs/9362.pem
	I1014 07:21:03.063873   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 07:21:03.081968   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem --> /etc/ssl/certs/9362.pem (1708 bytes)
	I1014 07:21:03.128637   13076 start.go:296] duration metric: took 4.6706031s for postStartSetup
	I1014 07:21:03.132031   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:21:05.196102   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:21:05.196622   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:05.196696   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:21:07.624940   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:21:07.625781   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:07.626103   13076 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\config.json ...
	I1014 07:21:07.629046   13076 start.go:128] duration metric: took 2m1.9103356s to createHost
	I1014 07:21:07.629046   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:21:09.687162   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:21:09.687420   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:09.687490   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:21:12.104556   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:21:12.104556   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:12.110165   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:21:12.110572   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.108.120 22 <nil> <nil>}
	I1014 07:21:12.110572   13076 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1014 07:21:12.241524   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728915672.238854784
	
	I1014 07:21:12.241655   13076 fix.go:216] guest clock: 1728915672.238854784
	I1014 07:21:12.241726   13076 fix.go:229] Guest: 2024-10-14 07:21:12.238854784 -0700 PDT Remote: 2024-10-14 07:21:07.6290467 -0700 PDT m=+127.381500801 (delta=4.609808084s)
	I1014 07:21:12.241841   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:21:14.299338   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:21:14.299599   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:14.299599   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:21:16.767477   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:21:16.767665   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:16.775565   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:21:16.775565   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.108.120 22 <nil> <nil>}
	I1014 07:21:16.775565   13076 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1728915672
	I1014 07:21:16.921038   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Oct 14 14:21:12 UTC 2024
	
	I1014 07:21:16.921096   13076 fix.go:236] clock set: Mon Oct 14 14:21:12 UTC 2024
	 (err=<nil>)
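[Editor sketch] The fix.go lines above read the guest clock with "date +%s.%N", compare it to the host's clock, and push the host's epoch second into the guest when they drift; here the 4.6s delta accumulated during the ~2-minute VM creation gets corrected. A sketch reusing the hypothetical sshRun helper (add strconv/strings/time imports):

	// syncGuestClock resets the guest clock when it drifts past threshold.
	func syncGuestClock(threshold time.Duration) error {
		out, err := sshRun("date +%s.%N")
		if err != nil {
			return err
		}
		sec, err := strconv.ParseFloat(strings.TrimSpace(out), 64)
		if err != nil {
			return err
		}
		delta := time.Since(time.Unix(0, int64(sec*1e9)))
		if delta < 0 {
			delta = -delta
		}
		if delta > threshold {
			// The log shows a 4.6s delta, so the clock is set explicitly.
			_, err = sshRun(fmt.Sprintf("sudo date -s @%d", time.Now().Unix()))
		}
		return err
	}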
	I1014 07:21:16.921169   13076 start.go:83] releasing machines lock for "ha-132600", held for 2m11.2030498s
	I1014 07:21:16.921319   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:21:18.988904   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:21:18.989898   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:18.989947   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:21:21.410167   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:21:21.410403   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:21.414482   13076 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1014 07:21:21.414683   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:21:21.423844   13076 ssh_runner.go:195] Run: cat /version.json
	I1014 07:21:21.423844   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:21:23.554001   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:21:23.554206   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:23.554362   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:21:23.565882   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:21:23.565882   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:23.565882   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:21:26.092249   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:21:26.092249   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:26.092379   13076 sshutil.go:53] new ssh client: &{IP:172.20.108.120 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\id_rsa Username:docker}
	I1014 07:21:26.119019   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:21:26.119086   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:26.119215   13076 sshutil.go:53] new ssh client: &{IP:172.20.108.120 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\id_rsa Username:docker}
	I1014 07:21:26.179573   13076 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.764971s)
	W1014 07:21:26.179670   13076 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1014 07:21:26.212879   13076 ssh_runner.go:235] Completed: cat /version.json: (4.7890281s)
	I1014 07:21:26.223815   13076 ssh_runner.go:195] Run: systemctl --version
	I1014 07:21:26.243251   13076 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 07:21:26.251321   13076 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 07:21:26.262431   13076 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 07:21:26.290027   13076 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1014 07:21:26.290027   13076 start.go:495] detecting cgroup driver to use...
	I1014 07:21:26.290027   13076 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W1014 07:21:26.296743   13076 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W1014 07:21:26.296743   13076 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
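[Editor note] The probe behind these two warnings is visible at 07:21:21: minikube ran "curl.exe -sS -m 2 https://registry.k8s.io/" through ssh_runner, i.e. inside the Linux guest, where bash reports "curl.exe: command not found" (exit 127). The warning therefore reflects the Windows-style binary name used for the in-VM probe rather than demonstrated registry unreachability. A hypothetical in-guest probe, assuming the guest image ships a curl binary at all (reusing the sshRun stand-in):

	// registryReachable probes from inside the guest with "curl", not
	// "curl.exe"; whether curl exists in the buildroot image is an assumption.
	func registryReachable() bool {
		_, err := sshRun("curl -sS -m 2 https://registry.k8s.io/")
		return err == nil
	}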
	I1014 07:21:26.340326   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1014 07:21:26.370362   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1014 07:21:26.391878   13076 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1014 07:21:26.402847   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1014 07:21:26.433241   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1014 07:21:26.462619   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1014 07:21:26.491735   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1014 07:21:26.520404   13076 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 07:21:26.550615   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1014 07:21:26.580278   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1014 07:21:26.608564   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1014 07:21:26.636849   13076 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 07:21:26.656448   13076 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1014 07:21:26.667504   13076 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1014 07:21:26.700184   13076 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
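[Editor sketch] The status-255 sysctl above is the expected first-boot path: the net.bridge key only exists after br_netfilter is loaded, so minikube falls back to modprobe and then enables IPv4 forwarding for the bridge CNI. Sketch (hypothetical sshRun helper as before):

	// ensureBridgeNetfilter mirrors the probe-then-modprobe fallback above.
	func ensureBridgeNetfilter() error {
		if _, err := sshRun("sudo sysctl net.bridge.bridge-nf-call-iptables"); err != nil {
			// Key is absent until the module loads (the status 255 above).
			if _, err := sshRun("sudo modprobe br_netfilter"); err != nil {
				return err
			}
		}
		_, err := sshRun(`sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`)
		return err
	}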
	I1014 07:21:26.726834   13076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:21:26.916163   13076 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1014 07:21:26.953200   13076 start.go:495] detecting cgroup driver to use...
	I1014 07:21:26.967627   13076 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1014 07:21:27.005080   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 07:21:27.039193   13076 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 07:21:27.083701   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 07:21:27.118670   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1014 07:21:27.153508   13076 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1014 07:21:27.209689   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1014 07:21:27.230765   13076 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 07:21:27.273213   13076 ssh_runner.go:195] Run: which cri-dockerd
	I1014 07:21:27.291783   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1014 07:21:27.308581   13076 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1014 07:21:27.351709   13076 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1014 07:21:27.539571   13076 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1014 07:21:27.725567   13076 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1014 07:21:27.725567   13076 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1014 07:21:27.768723   13076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:21:27.951142   13076 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1014 07:21:30.486027   13076 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5347969s)
	I1014 07:21:30.497722   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1014 07:21:30.531680   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1014 07:21:30.563742   13076 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1014 07:21:30.767218   13076 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1014 07:21:30.967093   13076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:21:31.191192   13076 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1014 07:21:31.229757   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1014 07:21:31.267426   13076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:21:31.462408   13076 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1014 07:21:31.569509   13076 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1014 07:21:31.581151   13076 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1014 07:21:31.591322   13076 start.go:563] Will wait 60s for crictl version
	I1014 07:21:31.604845   13076 ssh_runner.go:195] Run: which crictl
	I1014 07:21:31.622874   13076 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1014 07:21:31.680621   13076 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I1014 07:21:31.689423   13076 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1014 07:21:31.732594   13076 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1014 07:21:31.766375   13076 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	I1014 07:21:31.766375   13076 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I1014 07:21:31.770835   13076 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I1014 07:21:31.770835   13076 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I1014 07:21:31.770835   13076 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I1014 07:21:31.770835   13076 ip.go:211] Found interface: {Index:13 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:08:65:4d Flags:up|broadcast|multicast|running}
	I1014 07:21:31.773955   13076 ip.go:214] interface addr: fe80::b548:ff79:d464:75b8/64
	I1014 07:21:31.773955   13076 ip.go:214] interface addr: 172.20.96.1/20
	I1014 07:21:31.784171   13076 ssh_runner.go:195] Run: grep 172.20.96.1	host.minikube.internal$ /etc/hosts
	I1014 07:21:31.791192   13076 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.20.96.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 07:21:31.825649   13076 kubeadm.go:883] updating cluster {Name:ha-132600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-132600 Namespace:default APIServerHAVIP:172.20.111.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.108.120 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 07:21:31.825649   13076 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1014 07:21:31.834667   13076 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1014 07:21:31.867707   13076 docker.go:689] Got preloaded images: 
	I1014 07:21:31.867707   13076 docker.go:695] registry.k8s.io/kube-apiserver:v1.31.1 wasn't preloaded
	I1014 07:21:31.880238   13076 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1014 07:21:31.908893   13076 ssh_runner.go:195] Run: which lz4
	I1014 07:21:31.914374   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1014 07:21:31.924715   13076 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1014 07:21:31.930846   13076 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1014 07:21:31.931075   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (342028912 bytes)
	I1014 07:21:33.829320   13076 docker.go:653] duration metric: took 1.9148657s to copy over tarball
	I1014 07:21:33.840382   13076 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1014 07:21:42.539043   13076 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.6986506s)
	I1014 07:21:42.539174   13076 ssh_runner.go:146] rm: /preloaded.tar.lz4
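[Editor sketch] The ~342MB preload above replaces pulling each image individually: copy the lz4 tarball to /preloaded.tar.lz4 in the guest, unpack it into /var (populating /var/lib/docker), remove it, then restart docker below so the daemon picks up the restored image store. Sketch with a hypothetical scpToGuest helper alongside sshRun:

	// scpToGuest is illustrative; minikube copies over its SSH session.
	func scpToGuest(local, remote string) error {
		return exec.Command("scp", local, "docker@172.20.108.120:"+remote).Run()
	}

	// applyPreload matches the three steps logged above: copy, extract, delete.
	func applyPreload(localTarball string) error {
		if err := scpToGuest(localTarball, "/preloaded.tar.lz4"); err != nil {
			return err
		}
		if _, err := sshRun("sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4"); err != nil {
			return err
		}
		_, err := sshRun("rm -f /preloaded.tar.lz4")
		return err
	}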
	I1014 07:21:42.611065   13076 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1014 07:21:42.636639   13076 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I1014 07:21:42.681041   13076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:21:42.879066   13076 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1014 07:21:46.161713   13076 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.2824935s)
	I1014 07:21:46.173131   13076 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1014 07:21:46.198645   13076 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1014 07:21:46.198645   13076 cache_images.go:84] Images are preloaded, skipping loading
	I1014 07:21:46.198645   13076 kubeadm.go:934] updating node { 172.20.108.120 8443 v1.31.1 docker true true} ...
	I1014 07:21:46.198645   13076 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-132600 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.20.108.120
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-132600 Namespace:default APIServerHAVIP:172.20.111.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 07:21:46.209892   13076 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1014 07:21:46.283733   13076 cni.go:84] Creating CNI manager for ""
	I1014 07:21:46.283861   13076 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1014 07:21:46.283932   13076 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1014 07:21:46.284000   13076 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.20.108.120 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-132600 NodeName:ha-132600 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.20.108.120"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.20.108.120 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 07:21:46.284290   13076 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.20.108.120
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-132600"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "172.20.108.120"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.20.108.120"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
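[Editor sketch] The rendered config above is what gets written to /var/tmp/minikube/kubeadm.yaml.new (2293 bytes, per the scp line further down). A hypothetical way to sanity-check such a rendering before the real init, not something the log shows minikube doing (sshRun stand-in as before):

	// validateKubeadmConfig dry-runs kubeadm against the staged config.
	func validateKubeadmConfig() error {
		_, err := sshRun("sudo /var/lib/minikube/binaries/v1.31.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run")
		return err
	}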
	I1014 07:21:46.284367   13076 kube-vip.go:115] generating kube-vip config ...
	I1014 07:21:46.295693   13076 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1014 07:21:46.324017   13076 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1014 07:21:46.324230   13076 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.20.111.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
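[Editor note] The static pod above is what gives this HA cluster its control-plane VIP: ARP-based leader election (vip_leaderelection) over the kube-system lease plndr-cp-lock, advertising 172.20.111.254 on eth0, with port 8443 load-balancing enabled (lb_enable). A hypothetical host-side check that the VIP answers on the API server port once the pod is up (imports net and time):

	// vipReachable dials the kube-vip address on the API server port.
	func vipReachable() bool {
		conn, err := net.DialTimeout("tcp", "172.20.111.254:8443", 3*time.Second)
		if err != nil {
			return false
		}
		conn.Close()
		return true
	}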
	I1014 07:21:46.334184   13076 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1014 07:21:46.349321   13076 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 07:21:46.360771   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1014 07:21:46.379224   13076 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (310 bytes)
	I1014 07:21:46.412080   13076 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 07:21:46.441691   13076 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I1014 07:21:46.479025   13076 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1014 07:21:46.519461   13076 ssh_runner.go:195] Run: grep 172.20.111.254	control-plane.minikube.internal$ /etc/hosts
	I1014 07:21:46.525021   13076 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.20.111.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 07:21:46.554943   13076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:21:46.730466   13076 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 07:21:46.761153   13076 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600 for IP: 172.20.108.120
	I1014 07:21:46.761153   13076 certs.go:194] generating shared ca certs ...
	I1014 07:21:46.761153   13076 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:21:46.761825   13076 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I1014 07:21:46.762680   13076 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I1014 07:21:46.762846   13076 certs.go:256] generating profile certs ...
	I1014 07:21:46.763605   13076 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\client.key
	I1014 07:21:46.763605   13076 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\client.crt with IP's: []
	I1014 07:21:46.955865   13076 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\client.crt ...
	I1014 07:21:46.955865   13076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\client.crt: {Name:mk4a5e51a4b9d62279c7ef4ef47716bcdf88d704 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:21:46.957857   13076 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\client.key ...
	I1014 07:21:46.957857   13076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\client.key: {Name:mkc80e99cfe4d8d17198a87fb188cfa1a72d727d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:21:46.958881   13076 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.key.59094d61
	I1014 07:21:46.958881   13076 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.crt.59094d61 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.20.108.120 172.20.111.254]
	I1014 07:21:47.375660   13076 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.crt.59094d61 ...
	I1014 07:21:47.375660   13076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.crt.59094d61: {Name:mke858ab0ac103663123708f9814cfdffe03211e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:21:47.377216   13076 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.key.59094d61 ...
	I1014 07:21:47.377216   13076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.key.59094d61: {Name:mk972fca892a193d6b4e84d42e27ed86ce9f0457 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:21:47.377811   13076 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.crt.59094d61 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.crt
	I1014 07:21:47.391806   13076 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.key.59094d61 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.key
	I1014 07:21:47.392823   13076 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.key
	I1014 07:21:47.392823   13076 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.crt with IP's: []
	I1014 07:21:47.675751   13076 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.crt ...
	I1014 07:21:47.675751   13076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.crt: {Name:mk5a9f1239213dd0a0f36b4ee41d5ac56cbd79c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:21:47.677918   13076 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.key ...
	I1014 07:21:47.677918   13076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.key: {Name:mkb931094180fef1516b0d43acb6430ecbcbb3fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
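
certs.go/crypto.go sign these profile certs in-process; an equivalent openssl sketch (hypothetical file names, with O=system:masters assumed for the admin client identity, and the 1095-day lifetime matching the CertExpiration:26280h0m0s setting logged below) would be:

    # Key + CSR for the client identity, then sign with the minikube CA.
    openssl genrsa -out client.key 2048
    openssl req -new -key client.key \
      -subj "/CN=minikube-user/O=system:masters" -out client.csr
    openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key \
      -CAcreateserial -days 1095 -out client.crt
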
	I1014 07:21:47.678894   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I1014 07:21:47.678894   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I1014 07:21:47.678894   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1014 07:21:47.678894   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1014 07:21:47.679943   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1014 07:21:47.679943   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1014 07:21:47.679943   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1014 07:21:47.691415   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1014 07:21:47.693468   13076 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\936.pem (1338 bytes)
	W1014 07:21:47.693468   13076 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\936_empty.pem, impossibly tiny 0 bytes
	I1014 07:21:47.694004   13076 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1014 07:21:47.694400   13076 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I1014 07:21:47.694671   13076 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1014 07:21:47.695091   13076 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I1014 07:21:47.695960   13076 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem (1708 bytes)
	I1014 07:21:47.696145   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\936.pem -> /usr/share/ca-certificates/936.pem
	I1014 07:21:47.696401   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem -> /usr/share/ca-certificates/9362.pem
	I1014 07:21:47.696580   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1014 07:21:47.697580   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 07:21:47.742705   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1014 07:21:47.790988   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 07:21:47.837975   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 07:21:47.884872   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1014 07:21:47.936878   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1014 07:21:47.985330   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 07:21:48.029181   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 07:21:48.078824   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\936.pem --> /usr/share/ca-certificates/936.pem (1338 bytes)
	I1014 07:21:48.127907   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem --> /usr/share/ca-certificates/9362.pem (1708 bytes)
	I1014 07:21:48.174542   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 07:21:48.218635   13076 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 07:21:48.264009   13076 ssh_runner.go:195] Run: openssl version
	I1014 07:21:48.282939   13076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 07:21:48.310256   13076 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 07:21:48.318050   13076 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 13:44 /usr/share/ca-certificates/minikubeCA.pem
	I1014 07:21:48.328768   13076 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 07:21:48.348090   13076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 07:21:48.376484   13076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/936.pem && ln -fs /usr/share/ca-certificates/936.pem /etc/ssl/certs/936.pem"
	I1014 07:21:48.408494   13076 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/936.pem
	I1014 07:21:48.416513   13076 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 14:00 /usr/share/ca-certificates/936.pem
	I1014 07:21:48.427815   13076 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/936.pem
	I1014 07:21:48.447033   13076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/936.pem /etc/ssl/certs/51391683.0"
	I1014 07:21:48.474293   13076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9362.pem && ln -fs /usr/share/ca-certificates/9362.pem /etc/ssl/certs/9362.pem"
	I1014 07:21:48.502293   13076 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9362.pem
	I1014 07:21:48.509688   13076 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 14:00 /usr/share/ca-certificates/9362.pem
	I1014 07:21:48.519811   13076 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9362.pem
	I1014 07:21:48.540403   13076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9362.pem /etc/ssl/certs/3ec20f2e.0"
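
The b5213941.0 / 51391683.0 / 3ec20f2e.0 names above are OpenSSL subject hashes: each trusted CA gets a /etc/ssl/certs/<subject-hash>.0 symlink so the TLS stack can find it. Any one of these links can be reproduced by hand:

    # Compute the subject hash and (re)create the symlink OpenSSL looks up.
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
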
	I1014 07:21:48.569007   13076 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 07:21:48.575758   13076 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1014 07:21:48.576118   13076 kubeadm.go:392] StartCluster: {Name:ha-132600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-132600 Namespace:default APIServerHAVIP:172.20.111.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.108.120 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 07:21:48.584978   13076 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1014 07:21:48.619430   13076 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 07:21:48.647606   13076 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 07:21:48.676291   13076 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 07:21:48.693679   13076 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 07:21:48.693679   13076 kubeadm.go:157] found existing configuration files:
	
	I1014 07:21:48.703613   13076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 07:21:48.722067   13076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 07:21:48.732618   13076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 07:21:48.759836   13076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 07:21:48.777622   13076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 07:21:48.789748   13076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 07:21:48.818512   13076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 07:21:48.837396   13076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 07:21:48.848696   13076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 07:21:48.876926   13076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 07:21:48.893530   13076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 07:21:48.904372   13076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
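
The four grep/rm pairs above amount to one cleanup pass over the kubeconfig files kubeadm owns; condensed into a sketch (the real check runs grep per file from Go rather than as a shell loop):

    # Remove any kubeconfig that does not point at the expected control plane.
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q 'https://control-plane.minikube.internal:8443' \
        "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done
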
	I1014 07:21:48.920210   13076 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1014 07:21:49.392185   13076 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 07:22:04.140256   13076 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1014 07:22:04.140412   13076 kubeadm.go:310] [preflight] Running pre-flight checks
	I1014 07:22:04.140501   13076 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 07:22:04.140848   13076 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 07:22:04.141206   13076 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 07:22:04.141311   13076 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 07:22:04.144180   13076 out.go:235]   - Generating certificates and keys ...
	I1014 07:22:04.144373   13076 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1014 07:22:04.144575   13076 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1014 07:22:04.144939   13076 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1014 07:22:04.145172   13076 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1014 07:22:04.145315   13076 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1014 07:22:04.145521   13076 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1014 07:22:04.145845   13076 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1014 07:22:04.146212   13076 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-132600 localhost] and IPs [172.20.108.120 127.0.0.1 ::1]
	I1014 07:22:04.146365   13076 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1014 07:22:04.146614   13076 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-132600 localhost] and IPs [172.20.108.120 127.0.0.1 ::1]
	I1014 07:22:04.147003   13076 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1014 07:22:04.147200   13076 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1014 07:22:04.147236   13076 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1014 07:22:04.147236   13076 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 07:22:04.147236   13076 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 07:22:04.147236   13076 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 07:22:04.147954   13076 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 07:22:04.147987   13076 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 07:22:04.147987   13076 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 07:22:04.147987   13076 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 07:22:04.148785   13076 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 07:22:04.151235   13076 out.go:235]   - Booting up control plane ...
	I1014 07:22:04.151235   13076 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 07:22:04.151819   13076 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 07:22:04.152046   13076 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 07:22:04.152333   13076 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 07:22:04.152333   13076 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 07:22:04.152333   13076 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1014 07:22:04.152333   13076 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 07:22:04.152333   13076 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 07:22:04.153426   13076 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 523.263843ms
	I1014 07:22:04.153534   13076 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1014 07:22:04.153534   13076 kubeadm.go:310] [api-check] The API server is healthy after 9.176461865s
	I1014 07:22:04.153534   13076 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1014 07:22:04.154243   13076 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1014 07:22:04.154243   13076 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1014 07:22:04.154779   13076 kubeadm.go:310] [mark-control-plane] Marking the node ha-132600 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1014 07:22:04.154779   13076 kubeadm.go:310] [bootstrap-token] Using token: rm3d8a.inqsemtt5b5l5x2e
	I1014 07:22:04.157926   13076 out.go:235]   - Configuring RBAC rules ...
	I1014 07:22:04.158760   13076 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1014 07:22:04.158928   13076 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1014 07:22:04.158928   13076 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1014 07:22:04.159656   13076 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1014 07:22:04.159851   13076 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1014 07:22:04.160089   13076 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1014 07:22:04.160270   13076 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1014 07:22:04.160529   13076 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1014 07:22:04.160529   13076 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1014 07:22:04.160529   13076 kubeadm.go:310] 
	I1014 07:22:04.160529   13076 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1014 07:22:04.160529   13076 kubeadm.go:310] 
	I1014 07:22:04.160529   13076 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1014 07:22:04.160529   13076 kubeadm.go:310] 
	I1014 07:22:04.161100   13076 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1014 07:22:04.161186   13076 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1014 07:22:04.161352   13076 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1014 07:22:04.161352   13076 kubeadm.go:310] 
	I1014 07:22:04.161534   13076 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1014 07:22:04.161534   13076 kubeadm.go:310] 
	I1014 07:22:04.161706   13076 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1014 07:22:04.161706   13076 kubeadm.go:310] 
	I1014 07:22:04.161950   13076 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1014 07:22:04.161950   13076 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1014 07:22:04.161950   13076 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1014 07:22:04.161950   13076 kubeadm.go:310] 
	I1014 07:22:04.162515   13076 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1014 07:22:04.162607   13076 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1014 07:22:04.162607   13076 kubeadm.go:310] 
	I1014 07:22:04.162607   13076 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token rm3d8a.inqsemtt5b5l5x2e \
	I1014 07:22:04.162607   13076 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f876f951efbe7f1ed47ca578ee0d33e6ce8fe69c1a008a8154ee00f56ac84e46 \
	I1014 07:22:04.163162   13076 kubeadm.go:310] 	--control-plane 
	I1014 07:22:04.163261   13076 kubeadm.go:310] 
	I1014 07:22:04.163340   13076 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1014 07:22:04.163340   13076 kubeadm.go:310] 
	I1014 07:22:04.163570   13076 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token rm3d8a.inqsemtt5b5l5x2e \
	I1014 07:22:04.163570   13076 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f876f951efbe7f1ed47ca578ee0d33e6ce8fe69c1a008a8154ee00f56ac84e46 
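
The --discovery-token-ca-cert-hash printed above can be re-derived on the control plane at any time with the standard kubeadm recipe (this cluster keeps its CA under /var/lib/minikube/certs, per the [certs] line earlier):

    # Prints the hex digest; prefix it with "sha256:" when passing it to kubeadm join.
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
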
	I1014 07:22:04.163570   13076 cni.go:84] Creating CNI manager for ""
	I1014 07:22:04.163570   13076 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1014 07:22:04.167124   13076 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1014 07:22:04.179134   13076 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1014 07:22:04.187701   13076 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1014 07:22:04.187701   13076 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1014 07:22:04.233053   13076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1014 07:22:04.912894   13076 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1014 07:22:04.925574   13076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-132600 minikube.k8s.io/updated_at=2024_10_14T07_22_04_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b minikube.k8s.io/name=ha-132600 minikube.k8s.io/primary=true
	I1014 07:22:04.927795   13076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 07:22:04.958241   13076 ops.go:34] apiserver oom_adj: -16
	I1014 07:22:05.168034   13076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 07:22:05.667048   13076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 07:22:06.166157   13076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 07:22:06.667561   13076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 07:22:07.167283   13076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 07:22:07.305312   13076 kubeadm.go:1113] duration metric: took 2.3924152s to wait for elevateKubeSystemPrivileges
	I1014 07:22:07.306284   13076 kubeadm.go:394] duration metric: took 18.730142s to StartCluster
	I1014 07:22:07.306284   13076 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:22:07.306284   13076 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1014 07:22:07.308077   13076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:22:07.309777   13076 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1014 07:22:07.309777   13076 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.20.108.120 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1014 07:22:07.309953   13076 start.go:241] waiting for startup goroutines ...
	I1014 07:22:07.309868   13076 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1014 07:22:07.310143   13076 addons.go:69] Setting storage-provisioner=true in profile "ha-132600"
	I1014 07:22:07.310235   13076 addons.go:234] Setting addon storage-provisioner=true in "ha-132600"
	I1014 07:22:07.310143   13076 addons.go:69] Setting default-storageclass=true in profile "ha-132600"
	I1014 07:22:07.310402   13076 config.go:182] Loaded profile config "ha-132600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:22:07.310402   13076 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-132600"
	I1014 07:22:07.310402   13076 host.go:66] Checking if "ha-132600" exists ...
	I1014 07:22:07.311457   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:22:07.311926   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:22:07.495748   13076 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.20.96.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1014 07:22:08.130501   13076 start.go:971] {"host.minikube.internal": 172.20.96.1} host record injected into CoreDNS's ConfigMap
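
The sed pipeline above injects a hosts block into the Corefile held in the coredns ConfigMap; the result can be verified afterwards with any reachable kubeconfig (a sketch):

    # The Corefile should now contain:
    #     hosts {
    #        172.20.96.1 host.minikube.internal
    #        fallthrough
    #     }
    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
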
	I1014 07:22:09.594501   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:22:09.594501   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:09.594786   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:22:09.594878   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:09.595740   13076 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1014 07:22:09.596691   13076 kapi.go:59] client config for ha-132600: &rest.Config{Host:"https://172.20.111.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-132600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-132600\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2926ae0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1014 07:22:09.597939   13076 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 07:22:09.598267   13076 cert_rotation.go:140] Starting client certificate rotation controller
	I1014 07:22:09.598703   13076 addons.go:234] Setting addon default-storageclass=true in "ha-132600"
	I1014 07:22:09.598956   13076 host.go:66] Checking if "ha-132600" exists ...
	I1014 07:22:09.600143   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:22:09.600143   13076 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 07:22:09.600247   13076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1014 07:22:09.600352   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:22:11.901872   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:22:11.901872   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:11.902855   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:22:11.913525   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:22:11.914554   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:11.914725   13076 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1014 07:22:11.914841   13076 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1014 07:22:11.915002   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:22:14.163865   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:22:14.163865   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:14.163865   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:22:14.597752   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:22:14.597752   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:14.598273   13076 sshutil.go:53] new ssh client: &{IP:172.20.108.120 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\id_rsa Username:docker}
	I1014 07:22:14.755906   13076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 07:22:16.766331   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:22:16.766798   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:16.767079   13076 sshutil.go:53] new ssh client: &{IP:172.20.108.120 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\id_rsa Username:docker}
	I1014 07:22:16.907784   13076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1014 07:22:17.107355   13076 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1014 07:22:17.107355   13076 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1014 07:22:17.108322   13076 round_trippers.go:463] GET https://172.20.111.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1014 07:22:17.108322   13076 round_trippers.go:469] Request Headers:
	I1014 07:22:17.108322   13076 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:22:17.108322   13076 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:22:17.120122   13076 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1014 07:22:17.120927   13076 round_trippers.go:463] PUT https://172.20.111.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1014 07:22:17.121006   13076 round_trippers.go:469] Request Headers:
	I1014 07:22:17.121006   13076 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:22:17.121006   13076 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:22:17.121089   13076 round_trippers.go:473]     Content-Type: application/json
	I1014 07:22:17.124876   13076 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
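
The GET/PUT pair above is the default-storageclass reconcile issued through client-go; the same read can be reproduced directly against the API server, assuming local copies of the profile's client certs from the rest.Config logged earlier (hypothetical paths):

    # Direct read of the storageclasses the addon manager just reconciled.
    curl --cacert ca.crt --cert client.crt --key client.key \
      https://172.20.111.254:8443/apis/storage.k8s.io/v1/storageclasses
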
	I1014 07:22:17.128782   13076 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1014 07:22:17.133792   13076 addons.go:510] duration metric: took 9.8239112s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1014 07:22:17.133792   13076 start.go:246] waiting for cluster config update ...
	I1014 07:22:17.133792   13076 start.go:255] writing updated cluster config ...
	I1014 07:22:17.140793   13076 out.go:201] 
	I1014 07:22:17.150794   13076 config.go:182] Loaded profile config "ha-132600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:22:17.151784   13076 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\config.json ...
	I1014 07:22:17.157758   13076 out.go:177] * Starting "ha-132600-m02" control-plane node in "ha-132600" cluster
	I1014 07:22:17.160381   13076 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1014 07:22:17.160381   13076 cache.go:56] Caching tarball of preloaded images
	I1014 07:22:17.160381   13076 preload.go:172] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1014 07:22:17.160381   13076 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1014 07:22:17.160381   13076 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\config.json ...
	I1014 07:22:17.168672   13076 start.go:360] acquireMachinesLock for ha-132600-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 07:22:17.169194   13076 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-132600-m02"
	I1014 07:22:17.169334   13076 start.go:93] Provisioning new machine with config: &{Name:ha-132600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-132600 Namespace:default APIServerHAVIP:172.20.111.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.108.120 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1014 07:22:17.169334   13076 start.go:125] createHost starting for "m02" (driver="hyperv")
	I1014 07:22:17.171957   13076 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1014 07:22:17.171957   13076 start.go:159] libmachine.API.Create for "ha-132600" (driver="hyperv")
	I1014 07:22:17.171957   13076 client.go:168] LocalClient.Create starting
	I1014 07:22:17.171957   13076 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I1014 07:22:17.173148   13076 main.go:141] libmachine: Decoding PEM data...
	I1014 07:22:17.173148   13076 main.go:141] libmachine: Parsing certificate...
	I1014 07:22:17.173148   13076 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I1014 07:22:17.173148   13076 main.go:141] libmachine: Decoding PEM data...
	I1014 07:22:17.173802   13076 main.go:141] libmachine: Parsing certificate...
	I1014 07:22:17.173802   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I1014 07:22:19.094383   13076 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I1014 07:22:19.094383   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:19.094970   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I1014 07:22:20.808796   13076 main.go:141] libmachine: [stdout =====>] : False
	
	I1014 07:22:20.808796   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:20.808897   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1014 07:22:22.292959   13076 main.go:141] libmachine: [stdout =====>] : True
	
	I1014 07:22:22.292959   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:22.293074   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1014 07:22:25.811891   13076 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1014 07:22:25.811891   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:25.813583   13076 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1014 07:22:26.334218   13076 main.go:141] libmachine: Creating SSH key...
	I1014 07:22:26.579833   13076 main.go:141] libmachine: Creating VM...
	I1014 07:22:26.580305   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1014 07:22:29.402237   13076 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1014 07:22:29.402237   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:29.402237   13076 main.go:141] libmachine: Using switch "Default Switch"
	I1014 07:22:29.402237   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1014 07:22:31.181025   13076 main.go:141] libmachine: [stdout =====>] : True
	
	I1014 07:22:31.181025   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:31.181025   13076 main.go:141] libmachine: Creating VHD
	I1014 07:22:31.181025   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I1014 07:22:34.986705   13076 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 9BEBB030-6569-4A5D-A43D-C976E242B24D
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I1014 07:22:34.986953   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:34.986953   13076 main.go:141] libmachine: Writing magic tar header
	I1014 07:22:34.986953   13076 main.go:141] libmachine: Writing SSH key tar header
	I1014 07:22:34.998864   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I1014 07:22:38.117497   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:22:38.117497   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:38.117497   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\disk.vhd' -SizeBytes 20000MB
	I1014 07:22:40.750048   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:22:40.750048   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:40.750048   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-132600-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I1014 07:22:44.293371   13076 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-132600-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I1014 07:22:44.294009   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:44.294158   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-132600-m02 -DynamicMemoryEnabled $false
	I1014 07:22:46.495846   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:22:46.496106   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:46.496211   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-132600-m02 -Count 2
	I1014 07:22:48.637702   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:22:48.637702   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:48.637702   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-132600-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\boot2docker.iso'
	I1014 07:22:51.128606   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:22:51.128606   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:51.128606   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-132600-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\disk.vhd'
	I1014 07:22:53.765202   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:22:53.765775   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:53.765775   13076 main.go:141] libmachine: Starting VM...
	I1014 07:22:53.765847   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-132600-m02
	I1014 07:22:56.961396   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:22:56.961396   13076 main.go:141] libmachine: [stderr =====>] : 
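
Condensed, the VM-creation sequence the driver just ran for m02 is the one below; every step shells out to PowerShell exactly as the [executing ==>] lines show (paths abbreviated here, so this is a sketch rather than a replayable script):

    PS="powershell.exe -NoProfile -NonInteractive"
    $PS "Hyper-V\New-VHD -Path fixed.vhd -SizeBytes 10MB -Fixed"
    $PS "Hyper-V\Convert-VHD -Path fixed.vhd -DestinationPath disk.vhd -VHDType Dynamic -DeleteSource"
    $PS "Hyper-V\Resize-VHD -Path disk.vhd -SizeBytes 20000MB"
    $PS "Hyper-V\New-VM ha-132600-m02 -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB"
    $PS "Hyper-V\Set-VMMemory -VMName ha-132600-m02 -DynamicMemoryEnabled \$false"
    $PS "Hyper-V\Set-VMProcessor ha-132600-m02 -Count 2"
    $PS "Hyper-V\Set-VMDvdDrive -VMName ha-132600-m02 -Path boot2docker.iso"
    $PS "Hyper-V\Add-VMHardDiskDrive -VMName ha-132600-m02 -Path disk.vhd"
    $PS "Hyper-V\Start-VM ha-132600-m02"
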
	I1014 07:22:56.962040   13076 main.go:141] libmachine: Waiting for host to start...
	I1014 07:22:56.962040   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:22:59.251592   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:22:59.251592   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:59.251838   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:01.758067   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:23:01.758159   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:02.758293   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:04.921490   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:04.921565   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:04.921802   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:07.441595   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:23:07.441595   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:08.443053   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:10.659721   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:10.659721   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:10.660418   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:13.126618   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:23:13.126952   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:14.127183   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:16.307414   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:16.307414   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:16.307500   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:18.748902   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:23:18.749678   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:19.749962   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:21.893323   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:21.893323   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:21.893323   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:24.464351   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:23:24.465392   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:24.465471   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:26.567626   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:26.567626   13076 main.go:141] libmachine: [stderr =====>] : 
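The repeated Get-VM / ipaddresses pairs above are the driver's wait-for-IP loop: it polls Hyper-V through PowerShell until the guest's first network adapter reports an address (stdout stays empty until 07:23:24, then returns 172.20.111.83). A minimal, self-contained Go sketch of such a loop follows; the VM name and the roughly one-retry-per-interval cadence are taken from the log, everything else is illustrative and is not minikube's actual implementation.

// Hypothetical sketch of the wait-for-IP loop visible in the log above:
// poll Hyper-V via PowerShell until the VM reports an IP address.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// vmIP shells out to PowerShell the same way the log shows and returns the
// first IP address of the VM's first network adapter ("" while the guest
// has not finished booting).
func vmIP(name string) (string, error) {
	cmd := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive",
		fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", name))
	out, err := cmd.Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	const vm = "ha-132600-m02" // VM name taken from the log above
	for {
		ip, err := vmIP(vm)
		if err == nil && ip != "" {
			fmt.Println("VM is up at", ip) // e.g. 172.20.111.83
			return
		}
		time.Sleep(time.Second) // retry until the guest reports an address
	}
}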
	I1014 07:23:26.567626   13076 machine.go:93] provisionDockerMachine start ...
	I1014 07:23:26.568400   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:28.681591   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:28.681591   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:28.682618   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:31.178308   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:23:31.178308   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:31.192309   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:23:31.207523   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.111.83 22 <nil> <nil>}
	I1014 07:23:31.208024   13076 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 07:23:31.337861   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1014 07:23:31.337861   13076 buildroot.go:166] provisioning hostname "ha-132600-m02"
	I1014 07:23:31.337941   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:33.402839   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:33.402839   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:33.403340   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:35.894312   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:23:35.894312   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:35.901210   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:23:35.901699   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.111.83 22 <nil> <nil>}
	I1014 07:23:35.901823   13076 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-132600-m02 && echo "ha-132600-m02" | sudo tee /etc/hostname
	I1014 07:23:36.071040   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-132600-m02
	
	I1014 07:23:36.071040   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:38.150707   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:38.151729   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:38.151729   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:40.639242   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:23:40.639242   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:40.644598   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:23:40.645419   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.111.83 22 <nil> <nil>}
	I1014 07:23:40.645490   13076 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-132600-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-132600-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-132600-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 07:23:40.801974   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: 
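The shell snippet run over SSH above is an idempotent /etc/hosts update: leave the file alone if the hostname is already present, rewrite an existing 127.0.1.1 entry if there is one, otherwise append a new entry. Below is a hypothetical Go rendering of the same decision logic operating on the file contents as a string; ensureHostname is an invented name and the matching is approximate, not minikube's code.

// Hypothetical sketch of the /etc/hosts update logic shown in the log above.
package main

import (
	"fmt"
	"regexp"
)

// ensureHostname returns hosts with a 127.0.1.1 entry for name: unchanged if
// name is already present, rewritten in place if a 127.0.1.1 line exists,
// appended otherwise.
func ensureHostname(hosts, name string) string {
	if regexp.MustCompile(`\s` + regexp.QuoteMeta(name) + `(\s|$)`).MatchString(hosts) {
		return hosts // hostname already present, nothing to do
	}
	re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if re.MatchString(hosts) {
		return re.ReplaceAllString(hosts, "127.0.1.1 "+name)
	}
	return hosts + "127.0.1.1 " + name + "\n"
}

func main() {
	fmt.Print(ensureHostname("127.0.0.1 localhost\n", "ha-132600-m02"))
}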
	I1014 07:23:40.801974   13076 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I1014 07:23:40.801974   13076 buildroot.go:174] setting up certificates
	I1014 07:23:40.801974   13076 provision.go:84] configureAuth start
	I1014 07:23:40.801974   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:42.869101   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:42.869101   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:42.869101   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:45.382211   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:23:45.382688   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:45.382688   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:47.501182   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:47.501778   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:47.501832   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:50.008838   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:23:50.008838   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:50.009193   13076 provision.go:143] copyHostCerts
	I1014 07:23:50.009346   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I1014 07:23:50.009346   13076 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I1014 07:23:50.009346   13076 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I1014 07:23:50.010015   13076 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1014 07:23:50.011395   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I1014 07:23:50.011967   13076 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I1014 07:23:50.011967   13076 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I1014 07:23:50.011967   13076 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1014 07:23:50.013212   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I1014 07:23:50.014109   13076 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I1014 07:23:50.014109   13076 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I1014 07:23:50.014507   13076 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I1014 07:23:50.015649   13076 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-132600-m02 san=[127.0.0.1 172.20.111.83 ha-132600-m02 localhost minikube]
	I1014 07:23:50.130152   13076 provision.go:177] copyRemoteCerts
	I1014 07:23:50.140319   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 07:23:50.140319   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:52.242533   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:52.242533   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:52.243067   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:54.757999   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:23:54.758132   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:54.758132   13076 sshutil.go:53] new ssh client: &{IP:172.20.111.83 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\id_rsa Username:docker}
	I1014 07:23:54.867658   13076 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7273333s)
	I1014 07:23:54.867658   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1014 07:23:54.867658   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1014 07:23:54.915126   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1014 07:23:54.915808   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I1014 07:23:54.962775   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1014 07:23:54.963449   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1014 07:23:55.012733   13076 provision.go:87] duration metric: took 14.2107408s to configureAuth
	I1014 07:23:55.012733   13076 buildroot.go:189] setting minikube options for container-runtime
	I1014 07:23:55.013734   13076 config.go:182] Loaded profile config "ha-132600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:23:55.013734   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:57.174688   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:57.174688   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:57.175736   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:59.668037   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:23:59.668597   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:59.675074   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:23:59.675608   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.111.83 22 <nil> <nil>}
	I1014 07:23:59.675873   13076 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1014 07:23:59.826626   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1014 07:23:59.826626   13076 buildroot.go:70] root file system type: tmpfs
	I1014 07:23:59.826926   13076 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1014 07:23:59.827038   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:01.963890   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:01.964568   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:01.964568   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:04.515125   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:04.515943   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:04.521824   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:24:04.522248   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.111.83 22 <nil> <nil>}
	I1014 07:24:04.522248   13076 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.20.108.120"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1014 07:24:04.691427   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.20.108.120
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1014 07:24:04.692090   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:06.819823   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:06.820339   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:06.820339   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:09.321630   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:09.321630   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:09.326748   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:24:09.327748   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.111.83 22 <nil> <nil>}
	I1014 07:24:09.327808   13076 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1014 07:24:11.561493   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1014 07:24:11.561493   13076 machine.go:96] duration metric: took 44.993809s to provisionDockerMachine
	I1014 07:24:11.561493   13076 client.go:171] duration metric: took 1m54.3893961s to LocalClient.Create
	I1014 07:24:11.561493   13076 start.go:167] duration metric: took 1m54.3893961s to libmachine.API.Create "ha-132600"
	I1014 07:24:11.561493   13076 start.go:293] postStartSetup for "ha-132600-m02" (driver="hyperv")
	I1014 07:24:11.561493   13076 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 07:24:11.572490   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 07:24:11.572490   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:13.652544   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:13.653401   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:13.653599   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:16.188638   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:16.188638   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:16.189304   13076 sshutil.go:53] new ssh client: &{IP:172.20.111.83 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\id_rsa Username:docker}
	I1014 07:24:16.298812   13076 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7262867s)
	I1014 07:24:16.310231   13076 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 07:24:16.316927   13076 info.go:137] Remote host: Buildroot 2023.02.9
	I1014 07:24:16.316927   13076 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I1014 07:24:16.317465   13076 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I1014 07:24:16.318548   13076 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem -> 9362.pem in /etc/ssl/certs
	I1014 07:24:16.318548   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem -> /etc/ssl/certs/9362.pem
	I1014 07:24:16.332285   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 07:24:16.351469   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem --> /etc/ssl/certs/9362.pem (1708 bytes)
	I1014 07:24:16.401820   13076 start.go:296] duration metric: took 4.8403207s for postStartSetup
	I1014 07:24:16.404096   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:18.500218   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:18.501234   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:18.501234   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:20.974982   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:20.975125   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:20.975125   13076 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\config.json ...
	I1014 07:24:20.978124   13076 start.go:128] duration metric: took 2m3.8086365s to createHost
	I1014 07:24:20.978209   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:23.112937   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:23.113001   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:23.113001   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:25.636675   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:25.637206   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:25.641953   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:24:25.642670   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.111.83 22 <nil> <nil>}
	I1014 07:24:25.642670   13076 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1014 07:24:25.768945   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728915865.766702127
	
	I1014 07:24:25.769174   13076 fix.go:216] guest clock: 1728915865.766702127
	I1014 07:24:25.769174   13076 fix.go:229] Guest: 2024-10-14 07:24:25.766702127 -0700 PDT Remote: 2024-10-14 07:24:20.978124 -0700 PDT m=+320.730336401 (delta=4.788578127s)
	I1014 07:24:25.769174   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:27.845838   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:27.845838   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:27.845838   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:30.364175   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:30.364263   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:30.369382   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:24:30.369967   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.111.83 22 <nil> <nil>}
	I1014 07:24:30.370049   13076 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1728915865
	I1014 07:24:30.508807   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Oct 14 14:24:25 UTC 2024
	
	I1014 07:24:30.508807   13076 fix.go:236] clock set: Mon Oct 14 14:24:25 UTC 2024
	 (err=<nil>)
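The clock-fix exchange above reads the guest's clock with `date +%s.%N`, computes the drift against the host (delta=4.788578127s), and resets the guest to whole-second precision with `sudo date -s @1728915865`. A hypothetical sketch of that bookkeeping follows; the parsing and command construction are assumptions, only the timestamp value comes from the log.

// Hypothetical sketch of the guest-clock fix logged above: parse the output
// of `date +%s.%N`, compare it with the local clock, and build the reset
// command. Assumes a nine-digit fractional part, as in the log.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func main() {
	guestOut := "1728915865.766702127" // stdout of `date +%s.%N` from the log
	parts := strings.SplitN(guestOut, ".", 2)
	secs, _ := strconv.ParseInt(parts[0], 10, 64)
	nsecs, _ := strconv.ParseInt(parts[1], 10, 64)
	guest := time.Unix(secs, nsecs)

	delta := guest.Sub(time.Now()) // ~4.79s in the run above
	fmt.Printf("guest clock: %s (delta %s)\n", guest, delta)

	// Whole-second reset, matching the log's `sudo date -s @1728915865`.
	fmt.Printf("sudo date -s @%d\n", secs)
}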
	I1014 07:24:30.508807   13076 start.go:83] releasing machines lock for "ha-132600-m02", held for 2m13.3394475s
	I1014 07:24:30.508807   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:32.570516   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:32.570516   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:32.570516   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:35.081338   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:35.081338   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:35.084330   13076 out.go:177] * Found network options:
	I1014 07:24:35.087197   13076 out.go:177]   - NO_PROXY=172.20.108.120
	W1014 07:24:35.089531   13076 proxy.go:119] fail to check proxy env: Error ip not in block
	I1014 07:24:35.091879   13076 out.go:177]   - NO_PROXY=172.20.108.120
	W1014 07:24:35.094409   13076 proxy.go:119] fail to check proxy env: Error ip not in block
	W1014 07:24:35.095622   13076 proxy.go:119] fail to check proxy env: Error ip not in block
	I1014 07:24:35.098960   13076 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1014 07:24:35.099124   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:35.109488   13076 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1014 07:24:35.109488   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:37.293745   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:37.293818   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:37.293870   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:37.344951   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:37.344951   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:37.344951   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:39.933767   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:39.934139   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:39.934445   13076 sshutil.go:53] new ssh client: &{IP:172.20.111.83 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\id_rsa Username:docker}
	I1014 07:24:39.999458   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:39.999458   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:39.999458   13076 sshutil.go:53] new ssh client: &{IP:172.20.111.83 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\id_rsa Username:docker}
	I1014 07:24:40.039802   13076 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.930308s)
	W1014 07:24:40.039802   13076 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 07:24:40.051889   13076 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 07:24:40.057601   13076 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.9585844s)
	W1014 07:24:40.057601   13076 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1014 07:24:40.090774   13076 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1014 07:24:40.090774   13076 start.go:495] detecting cgroup driver to use...
	I1014 07:24:40.091315   13076 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 07:24:40.139757   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1014 07:24:40.172033   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	W1014 07:24:40.178804   13076 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W1014 07:24:40.178804   13076 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1014 07:24:40.194040   13076 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1014 07:24:40.209169   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1014 07:24:40.241042   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1014 07:24:40.273436   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1014 07:24:40.304842   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1014 07:24:40.336705   13076 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 07:24:40.371635   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1014 07:24:40.404378   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1014 07:24:40.435805   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1014 07:24:40.485319   13076 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 07:24:40.505473   13076 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1014 07:24:40.516975   13076 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1014 07:24:40.550905   13076 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 07:24:40.581632   13076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:24:40.785773   13076 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1014 07:24:40.819978   13076 start.go:495] detecting cgroup driver to use...
	I1014 07:24:40.830992   13076 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1014 07:24:40.876801   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 07:24:40.913930   13076 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 07:24:40.966444   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 07:24:41.006432   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1014 07:24:41.044531   13076 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1014 07:24:41.114617   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1014 07:24:41.139995   13076 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 07:24:41.187779   13076 ssh_runner.go:195] Run: which cri-dockerd
	I1014 07:24:41.207367   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1014 07:24:41.226988   13076 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1014 07:24:41.272924   13076 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1014 07:24:41.470204   13076 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1014 07:24:41.650925   13076 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1014 07:24:41.650925   13076 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
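The two entries above write a 130-byte /etc/docker/daemon.json that switches Docker to the cgroupfs cgroup driver. The file's exact contents are not shown in this log; the sketch below generates a plausible equivalent, and the exec-opts key is an assumption rather than something the log confirms.

// Hypothetical sketch of generating a daemon.json selecting cgroupfs;
// the exact fields minikube writes are not visible in this log.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	conf := map[string]any{
		// assumed key/value: Docker selects its cgroup driver via exec-opts
		"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
	}
	b, _ := json.MarshalIndent(conf, "", "  ")
	fmt.Println(string(b)) // contents to be copied to /etc/docker/daemon.json
}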
	I1014 07:24:41.694417   13076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:24:41.893295   13076 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1014 07:25:43.008408   13076 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.115034s)
	I1014 07:25:43.020423   13076 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I1014 07:25:43.059257   13076 out.go:201] 
	W1014 07:25:43.062124   13076 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Oct 14 14:24:09 ha-132600-m02 systemd[1]: Starting Docker Application Container Engine...
	Oct 14 14:24:09 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:09.938489944Z" level=info msg="Starting up"
	Oct 14 14:24:09 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:09.941531731Z" level=info msg="containerd not running, starting managed containerd"
	Oct 14 14:24:09 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:09.942638427Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=671
	Oct 14 14:24:09 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:09.976195986Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.003978070Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004049170Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004119469Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004233069Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004318268Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004457268Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004664767Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004761167Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004782067Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004795467Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004887666Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.005331964Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.008453752Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.008545052Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.008673551Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.008759151Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.008859951Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.008988550Z" level=info msg="metadata content store policy set" policy=shared
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.034095951Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.034335550Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.034407650Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.034432250Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.034449450Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.034603849Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.034945948Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035129847Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035263947Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035294447Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035312846Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035328546Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035343646Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035419846Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035443346Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035495946Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035526946Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035542746Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035565345Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035583545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035598745Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035630745Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035648245Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035663945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035689645Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035709545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035731545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035750545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035764245Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035777845Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035792145Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035809345Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035833944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035849444Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035864444Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036025444Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036057644Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036074443Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036087843Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036099043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036115343Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036131843Z" level=info msg="NRI interface is disabled by configuration."
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036482842Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036737141Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036799541Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036822041Z" level=info msg="containerd successfully booted in 0.062283s"
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.009541514Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.046546477Z" level=info msg="Loading containers: start."
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.205429391Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.426461627Z" level=info msg="Loading containers: done."
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.448023414Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.448125314Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.448200613Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.448511212Z" level=info msg="Daemon has completed initialization"
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.556153647Z" level=info msg="API listen on /var/run/docker.sock"
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.556338446Z" level=info msg="API listen on [::]:2376"
	Oct 14 14:24:11 ha-132600-m02 systemd[1]: Started Docker Application Container Engine.
	Oct 14 14:24:41 ha-132600-m02 systemd[1]: Stopping Docker Application Container Engine...
	Oct 14 14:24:41 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:41.916201367Z" level=info msg="Processing signal 'terminated'"
	Oct 14 14:24:41 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:41.918697964Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Oct 14 14:24:41 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:41.919058063Z" level=info msg="Daemon shutdown complete"
	Oct 14 14:24:41 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:41.919215663Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Oct 14 14:24:41 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:41.919252063Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Oct 14 14:24:42 ha-132600-m02 systemd[1]: docker.service: Deactivated successfully.
	Oct 14 14:24:42 ha-132600-m02 systemd[1]: Stopped Docker Application Container Engine.
	Oct 14 14:24:42 ha-132600-m02 systemd[1]: Starting Docker Application Container Engine...
	Oct 14 14:24:42 ha-132600-m02 dockerd[1075]: time="2024-10-14T14:24:42.978494717Z" level=info msg="Starting up"
	Oct 14 14:25:43 ha-132600-m02 dockerd[1075]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Oct 14 14:25:43 ha-132600-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Oct 14 14:25:43 ha-132600-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Oct 14 14:25:43 ha-132600-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Oct 14 14:24:09 ha-132600-m02 systemd[1]: Starting Docker Application Container Engine...
	Oct 14 14:24:09 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:09.938489944Z" level=info msg="Starting up"
	Oct 14 14:24:09 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:09.941531731Z" level=info msg="containerd not running, starting managed containerd"
	Oct 14 14:24:09 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:09.942638427Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=671
	Oct 14 14:24:09 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:09.976195986Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.003978070Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004049170Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004119469Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004233069Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004318268Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004457268Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004664767Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004761167Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004782067Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004795467Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004887666Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.005331964Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.008453752Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.008545052Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.008673551Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.008759151Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.008859951Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.008988550Z" level=info msg="metadata content store policy set" policy=shared
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.034095951Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.034335550Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.034407650Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.034432250Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.034449450Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.034603849Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.034945948Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035129847Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035263947Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035294447Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035312846Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035328546Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035343646Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035419846Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035443346Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035495946Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035526946Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035542746Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035565345Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035583545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035598745Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035630745Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035648245Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035663945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035689645Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035709545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035731545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035750545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035764245Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035777845Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035792145Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035809345Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035833944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035849444Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035864444Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036025444Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036057644Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036074443Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036087843Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036099043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036115343Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036131843Z" level=info msg="NRI interface is disabled by configuration."
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036482842Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036737141Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036799541Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036822041Z" level=info msg="containerd successfully booted in 0.062283s"
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.009541514Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.046546477Z" level=info msg="Loading containers: start."
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.205429391Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.426461627Z" level=info msg="Loading containers: done."
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.448023414Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.448125314Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.448200613Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.448511212Z" level=info msg="Daemon has completed initialization"
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.556153647Z" level=info msg="API listen on /var/run/docker.sock"
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.556338446Z" level=info msg="API listen on [::]:2376"
	Oct 14 14:24:11 ha-132600-m02 systemd[1]: Started Docker Application Container Engine.
	Oct 14 14:24:41 ha-132600-m02 systemd[1]: Stopping Docker Application Container Engine...
	Oct 14 14:24:41 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:41.916201367Z" level=info msg="Processing signal 'terminated'"
	Oct 14 14:24:41 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:41.918697964Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Oct 14 14:24:41 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:41.919058063Z" level=info msg="Daemon shutdown complete"
	Oct 14 14:24:41 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:41.919215663Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Oct 14 14:24:41 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:41.919252063Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Oct 14 14:24:42 ha-132600-m02 systemd[1]: docker.service: Deactivated successfully.
	Oct 14 14:24:42 ha-132600-m02 systemd[1]: Stopped Docker Application Container Engine.
	Oct 14 14:24:42 ha-132600-m02 systemd[1]: Starting Docker Application Container Engine...
	Oct 14 14:24:42 ha-132600-m02 dockerd[1075]: time="2024-10-14T14:24:42.978494717Z" level=info msg="Starting up"
	Oct 14 14:25:43 ha-132600-m02 dockerd[1075]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Oct 14 14:25:43 ha-132600-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Oct 14 14:25:43 ha-132600-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Oct 14 14:25:43 ha-132600-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
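	The journal above shows two recurring guest-side warnings before the restart: bridge-nf-call-iptables/ip6tables are disabled, and ip6tables cannot create its NAT chain. A quick way to confirm the kernel bridge settings inside the VM is a sysctl over minikube ssh; this is a diagnostic sketch, assuming the ha-132600 profile is still reachable:

	    # Check bridge netfilter state inside the guest (diagnostic sketch).
	    out/minikube-windows-amd64.exe -p ha-132600 ssh "lsmod | grep br_netfilter; sudo sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables"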
	W1014 07:25:43.062124   13076 out.go:270] * 
	* 
	W1014 07:25:43.062827   13076 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 07:25:43.067027   13076 out.go:201] 

                                                
                                                
** /stderr **
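The actual failure above is dockerd timing out while dialing /run/containerd/containerd.sock after the restart, so the component to inspect next is containerd rather than docker itself. A minimal diagnostic sketch over minikube ssh, assuming the m02 VM is still up (profile and node names as used elsewhere in this run):

    # Inspect containerd on the second control-plane node (diagnostic sketch).
    out/minikube-windows-amd64.exe -p ha-132600 ssh -n ha-132600-m02 "sudo systemctl status containerd --no-pager"
    out/minikube-windows-amd64.exe -p ha-132600 ssh -n ha-132600-m02 "sudo journalctl -u containerd --no-pager | tail -n 50"
    out/minikube-windows-amd64.exe -p ha-132600 ssh -n ha-132600-m02 "ls -l /run/containerd/containerd.sock"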
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-windows-amd64.exe start -p ha-132600 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperv" : exit status 90
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-132600 -n ha-132600
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-132600 -n ha-132600: (11.7988848s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/StartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StartCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-132600 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-132600 logs -n 25: (8.2138608s)
helpers_test.go:252: TestMultiControlPlane/serial/StartCluster logs: 
-- stdout --
	
	==> Audit <==
	|----------------|---------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	|    Command     |                 Args                  |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|----------------|---------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| image          | functional-572000 image save --daemon | functional-572000 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:12 PDT | 14 Oct 24 07:12 PDT |
	|                | kicbase/echo-server:functional-572000 |                   |                   |         |                     |                     |
	|                | --alsologtostderr                     |                   |                   |         |                     |                     |
	| start          | -p functional-572000                  | functional-572000 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:12 PDT |                     |
	|                | --dry-run --memory                    |                   |                   |         |                     |                     |
	|                | 250MB --alsologtostderr               |                   |                   |         |                     |                     |
	|                | --driver=hyperv                       |                   |                   |         |                     |                     |
	| start          | -p functional-572000                  | functional-572000 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:12 PDT |                     |
	|                | --dry-run --memory                    |                   |                   |         |                     |                     |
	|                | 250MB --alsologtostderr               |                   |                   |         |                     |                     |
	|                | --driver=hyperv                       |                   |                   |         |                     |                     |
	| ssh            | functional-572000 ssh sudo cat        | functional-572000 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:12 PDT | 14 Oct 24 07:12 PDT |
	|                | /etc/ssl/certs/936.pem                |                   |                   |         |                     |                     |
	| ssh            | functional-572000 ssh sudo cat        | functional-572000 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:12 PDT | 14 Oct 24 07:12 PDT |
	|                | /usr/share/ca-certificates/936.pem    |                   |                   |         |                     |                     |
	| docker-env     | functional-572000 docker-env          | functional-572000 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:12 PDT | 14 Oct 24 07:12 PDT |
	| ssh            | functional-572000 ssh sudo cat        | functional-572000 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:12 PDT | 14 Oct 24 07:12 PDT |
	|                | /etc/test/nested/copy/936/hosts       |                   |                   |         |                     |                     |
	| ssh            | functional-572000 ssh sudo cat        | functional-572000 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:12 PDT | 14 Oct 24 07:12 PDT |
	|                | /etc/ssl/certs/51391683.0             |                   |                   |         |                     |                     |
	| dashboard      | --url --port 36195                    | functional-572000 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:12 PDT |                     |
	|                | -p functional-572000                  |                   |                   |         |                     |                     |
	|                | --alsologtostderr -v=1                |                   |                   |         |                     |                     |
	| ssh            | functional-572000 ssh sudo cat        | functional-572000 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:12 PDT | 14 Oct 24 07:13 PDT |
	|                | /etc/ssl/certs/9362.pem               |                   |                   |         |                     |                     |
	| ssh            | functional-572000 ssh sudo cat        | functional-572000 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:13 PDT | 14 Oct 24 07:13 PDT |
	|                | /usr/share/ca-certificates/9362.pem   |                   |                   |         |                     |                     |
	| docker-env     | functional-572000 docker-env          | functional-572000 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:13 PDT | 14 Oct 24 07:13 PDT |
	| ssh            | functional-572000 ssh sudo cat        | functional-572000 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:13 PDT | 14 Oct 24 07:13 PDT |
	|                | /etc/ssl/certs/3ec20f2e.0             |                   |                   |         |                     |                     |
	| image          | functional-572000                     | functional-572000 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:13 PDT | 14 Oct 24 07:13 PDT |
	|                | image ls --format short               |                   |                   |         |                     |                     |
	|                | --alsologtostderr                     |                   |                   |         |                     |                     |
	| image          | functional-572000                     | functional-572000 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:13 PDT | 14 Oct 24 07:13 PDT |
	|                | image ls --format yaml                |                   |                   |         |                     |                     |
	|                | --alsologtostderr                     |                   |                   |         |                     |                     |
	| ssh            | functional-572000 ssh pgrep           | functional-572000 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:13 PDT |                     |
	|                | buildkitd                             |                   |                   |         |                     |                     |
	| image          | functional-572000                     | functional-572000 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:13 PDT | 14 Oct 24 07:13 PDT |
	|                | image ls --format json                |                   |                   |         |                     |                     |
	|                | --alsologtostderr                     |                   |                   |         |                     |                     |
	| image          | functional-572000 image build -t      | functional-572000 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:13 PDT | 14 Oct 24 07:13 PDT |
	|                | localhost/my-image:functional-572000  |                   |                   |         |                     |                     |
	|                | testdata\build --alsologtostderr      |                   |                   |         |                     |                     |
	| image          | functional-572000                     | functional-572000 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:13 PDT | 14 Oct 24 07:13 PDT |
	|                | image ls --format table               |                   |                   |         |                     |                     |
	|                | --alsologtostderr                     |                   |                   |         |                     |                     |
	| update-context | functional-572000                     | functional-572000 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:13 PDT | 14 Oct 24 07:13 PDT |
	|                | update-context                        |                   |                   |         |                     |                     |
	|                | --alsologtostderr -v=2                |                   |                   |         |                     |                     |
	| image          | functional-572000 image ls            | functional-572000 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:13 PDT | 14 Oct 24 07:14 PDT |
	| update-context | functional-572000                     | functional-572000 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:13 PDT | 14 Oct 24 07:14 PDT |
	|                | update-context                        |                   |                   |         |                     |                     |
	|                | --alsologtostderr -v=2                |                   |                   |         |                     |                     |
	| update-context | functional-572000                     | functional-572000 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:13 PDT | 14 Oct 24 07:14 PDT |
	|                | update-context                        |                   |                   |         |                     |                     |
	|                | --alsologtostderr -v=2                |                   |                   |         |                     |                     |
	| delete         | -p functional-572000                  | functional-572000 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:17 PDT | 14 Oct 24 07:19 PDT |
	| start          | -p ha-132600 --wait=true              | ha-132600         | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:19 PDT |                     |
	|                | --memory=2200 --ha                    |                   |                   |         |                     |                     |
	|                | -v=7 --alsologtostderr                |                   |                   |         |                     |                     |
	|                | --driver=hyperv                       |                   |                   |         |                     |                     |
	|----------------|---------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
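	The audit table above is the head of the `minikube logs` output the helpers collect. The same post-mortem data can be reproduced by hand with the commands already shown in this report; a sketch:

	    # Reproduce the post-mortem collection (commands as used by helpers_test.go).
	    out/minikube-windows-amd64.exe status --format='{{.Host}}' -p ha-132600 -n ha-132600
	    out/minikube-windows-amd64.exe -p ha-132600 logs -n 25
	    # Full logs to a file, as the failure box suggests:
	    out/minikube-windows-amd64.exe -p ha-132600 logs --file=logs.txt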
	
	
	==> Last Start <==
	Log file created at: 2024/10/14 07:19:00
	Running on machine: minikube1
	Binary: Built with gc go1.23.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 07:19:00.342865   13076 out.go:345] Setting OutFile to fd 1228 ...
	I1014 07:19:00.344546   13076 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:19:00.344546   13076 out.go:358] Setting ErrFile to fd 824...
	I1014 07:19:00.344546   13076 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:19:00.368247   13076 out.go:352] Setting JSON to false
	I1014 07:19:00.372277   13076 start.go:129] hostinfo: {"hostname":"minikube1","uptime":101054,"bootTime":1728814485,"procs":202,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5011 Build 19045.5011","kernelVersion":"10.0.19045.5011 Build 19045.5011","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W1014 07:19:00.372803   13076 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1014 07:19:00.379728   13076 out.go:177] * [ha-132600] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5011 Build 19045.5011
	I1014 07:19:00.386820   13076 notify.go:220] Checking for updates...
	I1014 07:19:00.389571   13076 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1014 07:19:00.392679   13076 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 07:19:00.395167   13076 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I1014 07:19:00.398285   13076 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 07:19:00.400520   13076 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 07:19:00.404118   13076 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 07:19:05.654471   13076 out.go:177] * Using the hyperv driver based on user configuration
	I1014 07:19:05.658285   13076 start.go:297] selected driver: hyperv
	I1014 07:19:05.658285   13076 start.go:901] validating driver "hyperv" against <nil>
	I1014 07:19:05.658285   13076 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 07:19:05.705211   13076 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1014 07:19:05.706541   13076 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 07:19:05.706541   13076 cni.go:84] Creating CNI manager for ""
	I1014 07:19:05.706541   13076 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1014 07:19:05.706541   13076 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1014 07:19:05.707066   13076 start.go:340] cluster config:
	{Name:ha-132600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-132600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
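	This cluster config is persisted as JSON in the profile directory (see the profile.go line below). To pick out individual fields without reading the whole dump, the saved file can be loaded in PowerShell; a sketch, using the path from this run and the field names visible in the dump:

	    # Load the persisted cluster config and extract a few fields (sketch).
	    $cfg = Get-Content 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\config.json' -Raw | ConvertFrom-Json
	    $cfg.Driver
	    $cfg.Memory
	    $cfg.KubernetesConfig.KubernetesVersion
	    $cfg.Nodes | Format-Table Name, IP, ControlPlane, Worker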
	I1014 07:19:05.707207   13076 iso.go:125] acquiring lock: {Name:mkb71635bcbccba29dce9048ad4cc71430a7e577 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 07:19:05.711264   13076 out.go:177] * Starting "ha-132600" primary control-plane node in "ha-132600" cluster
	I1014 07:19:05.715097   13076 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1014 07:19:05.715262   13076 preload.go:146] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I1014 07:19:05.715344   13076 cache.go:56] Caching tarball of preloaded images
	I1014 07:19:05.715452   13076 preload.go:172] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1014 07:19:05.715452   13076 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
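	The preload check above only verifies that the tarball is already present in the local cache, which skips a large download. A quick equivalent check from PowerShell, path taken verbatim from the log:

	    # Confirm the cached preload tarball exists and note its size (sketch).
	    Get-Item 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4' |
	        Select-Object Name, Length, LastWriteTime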
	I1014 07:19:05.716553   13076 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\config.json ...
	I1014 07:19:05.716898   13076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\config.json: {Name:mkbd11bf3f90adebf1f4630d2e8deaac328748d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:19:05.717886   13076 start.go:360] acquireMachinesLock for ha-132600: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 07:19:05.717886   13076 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-132600"
	I1014 07:19:05.718562   13076 start.go:93] Provisioning new machine with config: &{Name:ha-132600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-132600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1014 07:19:05.718562   13076 start.go:125] createHost starting for "" (driver="hyperv")
	I1014 07:19:05.721891   13076 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1014 07:19:05.722555   13076 start.go:159] libmachine.API.Create for "ha-132600" (driver="hyperv")
	I1014 07:19:05.722555   13076 client.go:168] LocalClient.Create starting
	I1014 07:19:05.722555   13076 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I1014 07:19:05.723316   13076 main.go:141] libmachine: Decoding PEM data...
	I1014 07:19:05.723316   13076 main.go:141] libmachine: Parsing certificate...
	I1014 07:19:05.723316   13076 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I1014 07:19:05.723877   13076 main.go:141] libmachine: Decoding PEM data...
	I1014 07:19:05.723955   13076 main.go:141] libmachine: Parsing certificate...
	I1014 07:19:05.723955   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I1014 07:19:07.752927   13076 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I1014 07:19:07.753576   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:07.753793   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I1014 07:19:09.445838   13076 main.go:141] libmachine: [stdout =====>] : False
	
	I1014 07:19:09.445838   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:09.446315   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1014 07:19:10.901034   13076 main.go:141] libmachine: [stdout =====>] : True
	
	I1014 07:19:10.901034   13076 main.go:141] libmachine: [stderr =====>] : 
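	The three PowerShell probes above are how the driver decides it may manage Hyper-V: the Hyper-V module must be available, and the user must be in either the Hyper-V Administrators group (SID S-1-5-32-578) or the built-in Administrator role. Here the group check returns False and the admin check returns True, so provisioning proceeds. The same probes, restated for standalone use:

	    # Hyper-V availability and privilege probes (as issued by the driver).
	    @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	    $id = [Security.Principal.WindowsIdentity]::GetCurrent()
	    $p  = [Security.Principal.WindowsPrincipal]$id
	    $p.IsInRole([System.Security.Principal.SecurityIdentifier]::new('S-1-5-32-578'))   # Hyper-V Administrators
	    $p.IsInRole([Security.Principal.WindowsBuiltInRole]'Administrator')                # local Administrator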
	I1014 07:19:10.901034   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1014 07:19:14.320078   13076 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1014 07:19:14.320309   13076 main.go:141] libmachine: [stderr =====>] : 
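	Switch selection lists any External switch plus the well-known Default Switch GUID, then sorts so an external switch would win; here only the Default Switch (SwitchType 1, internal) exists, so it is used. The same query, reformatted for interactive use:

	    # List candidate switches: any External switch, or the built-in Default Switch.
	    [Console]::OutputEncoding = [Text.Encoding]::UTF8
	    Hyper-V\Get-VMSwitch |
	        Select-Object Id, Name, SwitchType |
	        Where-Object { ($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444') } |
	        Sort-Object -Property SwitchType |
	        ConvertTo-Json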
	I1014 07:19:14.323253   13076 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1014 07:19:14.829443   13076 main.go:141] libmachine: Creating SSH key...
	I1014 07:19:14.988345   13076 main.go:141] libmachine: Creating VM...
	I1014 07:19:14.988345   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1014 07:19:17.790932   13076 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1014 07:19:17.791005   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:17.791145   13076 main.go:141] libmachine: Using switch "Default Switch"
	I1014 07:19:17.791145   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1014 07:19:19.529861   13076 main.go:141] libmachine: [stdout =====>] : True
	
	I1014 07:19:19.529861   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:19.529861   13076 main.go:141] libmachine: Creating VHD
	I1014 07:19:19.530436   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\fixed.vhd' -SizeBytes 10MB -Fixed
	I1014 07:19:23.105904   13076 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 267BC11D-A195-4636-8AE9-EF1D7966C95C
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I1014 07:19:23.105987   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:23.106037   13076 main.go:141] libmachine: Writing magic tar header
	I1014 07:19:23.106037   13076 main.go:141] libmachine: Writing SSH key tar header
	I1014 07:19:23.118040   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\disk.vhd' -VHDType Dynamic -DeleteSource
	I1014 07:19:26.201609   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:26.201950   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:26.202023   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\disk.vhd' -SizeBytes 20000MB
	I1014 07:19:28.768368   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:28.768789   13076 main.go:141] libmachine: [stderr =====>] : 
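	The disk is prepared in three steps: a tiny 10 MB fixed VHD is created so raw data (the "magic tar header" and SSH key written above) can be placed at the start of the file where the guest image can find it on first boot, then the VHD is converted to a dynamic disk and grown to the requested 20000 MB. The equivalent standalone sequence, with paths as in this run:

	    # VHD preparation sequence as issued by the driver (sketch).
	    $dir = 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600'
	    Hyper-V\New-VHD     -Path "$dir\fixed.vhd" -SizeBytes 10MB -Fixed
	    # ...the raw tar header and SSH key are written into fixed.vhd here...
	    Hyper-V\Convert-VHD -Path "$dir\fixed.vhd" -DestinationPath "$dir\disk.vhd" -VHDType Dynamic -DeleteSource
	    Hyper-V\Resize-VHD  -Path "$dir\disk.vhd" -SizeBytes 20000MB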
	I1014 07:19:28.768969   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-132600 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I1014 07:19:32.226690   13076 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-132600 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I1014 07:19:32.226915   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:32.226915   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-132600 -DynamicMemoryEnabled $false
	I1014 07:19:34.390731   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:34.390837   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:34.391049   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-132600 -Count 2
	I1014 07:19:36.449382   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:36.449628   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:36.449628   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-132600 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\boot2docker.iso'
	I1014 07:19:38.932909   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:38.932909   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:38.932909   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-132600 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\disk.vhd'
	I1014 07:19:41.509229   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:41.509229   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:41.509229   13076 main.go:141] libmachine: Starting VM...
	I1014 07:19:41.509229   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-132600
	I1014 07:19:44.718680   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:44.718680   13076 main.go:141] libmachine: [stderr =====>] : 
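	Everything from New-VM through Start-VM above is plain Hyper-V cmdlets issued one at a time. Collected into one place, with arguments verbatim from the log, the VM build is:

	    # VM creation and configuration, as issued step by step above (sketch).
	    $dir = 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600'
	    Hyper-V\New-VM ha-132600 -Path $dir -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	    Hyper-V\Set-VMMemory -VMName ha-132600 -DynamicMemoryEnabled $false   # fixed 2200 MB allocation
	    Hyper-V\Set-VMProcessor ha-132600 -Count 2
	    Hyper-V\Set-VMDvdDrive -VMName ha-132600 -Path "$dir\boot2docker.iso"
	    Hyper-V\Add-VMHardDiskDrive -VMName ha-132600 -Path "$dir\disk.vhd"
	    Hyper-V\Start-VM ha-132600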
	I1014 07:19:44.719475   13076 main.go:141] libmachine: Waiting for host to start...
	I1014 07:19:44.719770   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:19:46.954823   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:19:46.954823   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:46.955829   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:19:49.408227   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:49.408227   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:50.409369   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:19:52.561607   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:19:52.561912   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:52.561912   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:19:55.081136   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:55.081187   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:56.082401   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:19:58.271497   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:19:58.271497   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:58.272384   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:00.740863   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:20:00.741070   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:01.741536   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:03.902654   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:03.902721   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:03.902721   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:06.378064   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:20:06.378064   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:07.379104   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:09.537329   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:09.537329   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:09.537541   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:12.074320   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:12.074320   13076 main.go:141] libmachine: [stderr =====>] : 
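	The repeated state/ipaddresses queries above form a poll loop: the VM reports Running almost immediately, but the first NIC has no IPv4 lease until about 07:20:12, so the driver keeps retrying until a non-empty address comes back. A minimal equivalent loop:

	    # Poll until the first NIC reports an address (sketch of the wait loop above).
	    do {
	        Start-Sleep -Seconds 1
	        $state = ( Hyper-V\Get-VM ha-132600 ).State
	        $ip    = (( Hyper-V\Get-VM ha-132600 ).NetworkAdapters[0]).IPAddresses[0]
	    } until ($state -eq 'Running' -and $ip)
	    "VM $state at $ip"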
	I1014 07:20:12.074834   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:14.171613   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:14.171613   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:14.171613   13076 machine.go:93] provisionDockerMachine start ...
	I1014 07:20:14.172377   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:16.225265   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:16.225265   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:16.225634   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:18.692061   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:18.692061   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:18.698261   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:20:18.713495   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.108.120 22 <nil> <nil>}
	I1014 07:20:18.713568   13076 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 07:20:18.839311   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1014 07:20:18.839517   13076 buildroot.go:166] provisioning hostname "ha-132600"
	I1014 07:20:18.839517   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:20.911254   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:20.911424   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:20.911601   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:23.370138   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:23.370756   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:23.377008   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:20:23.377773   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.108.120 22 <nil> <nil>}
	I1014 07:20:23.377773   13076 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-132600 && echo "ha-132600" | sudo tee /etc/hostname
	I1014 07:20:23.537548   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-132600
	
	I1014 07:20:23.537548   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:25.571429   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:25.571429   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:25.572326   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:28.068677   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:28.068677   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:28.077026   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:20:28.078615   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.108.120 22 <nil> <nil>}
	I1014 07:20:28.078615   13076 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-132600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-132600/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-132600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 07:20:28.216809   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: 
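	Host naming is done in two SSH commands: set the hostname and write it to /etc/hostname, then patch the 127.0.1.1 entry in /etc/hosts (the grep/sed block above). Once the node is up, the result can be spot-checked the same way; a sketch:

	    # Verify the hostname provisioning took effect (sketch).
	    out/minikube-windows-amd64.exe -p ha-132600 ssh "hostname; grep ha-132600 /etc/hosts"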
	I1014 07:20:28.216874   13076 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I1014 07:20:28.216990   13076 buildroot.go:174] setting up certificates
	I1014 07:20:28.217016   13076 provision.go:84] configureAuth start
	I1014 07:20:28.217047   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:30.246758   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:30.247093   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:30.247368   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:32.688428   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:32.688428   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:32.688428   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:34.748908   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:34.748908   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:34.749349   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:37.221628   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:37.221798   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:37.221798   13076 provision.go:143] copyHostCerts
	I1014 07:20:37.221998   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I1014 07:20:37.222444   13076 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I1014 07:20:37.222558   13076 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I1014 07:20:37.223110   13076 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1014 07:20:37.224109   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I1014 07:20:37.224599   13076 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I1014 07:20:37.224599   13076 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I1014 07:20:37.224959   13076 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I1014 07:20:37.225332   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I1014 07:20:37.225332   13076 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I1014 07:20:37.225332   13076 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I1014 07:20:37.226765   13076 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1014 07:20:37.227733   13076 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-132600 san=[127.0.0.1 172.20.108.120 ha-132600 localhost minikube]
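	
	The SAN list covers every name or address a client might use to reach the Docker daemon: loopback, the VM's current IP, the machine name, localhost, and minikube. minikube generates this certificate in Go, but an equivalent could be produced with openssl (a sketch assuming the ca.pem/ca-key.pem named above and a pre-generated server-key.pem):
	
	    # Issue a CA-signed server cert with the same SANs (hypothetical openssl equivalent).
	    openssl req -new -key server-key.pem -subj "/O=jenkins.ha-132600" -out server.csr
	    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	        -days 365 -out server.pem \
	        -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:172.20.108.120,DNS:ha-132600,DNS:localhost,DNS:minikube')
	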
	I1014 07:20:37.731674   13076 provision.go:177] copyRemoteCerts
	I1014 07:20:37.744706   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 07:20:37.744706   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:39.801309   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:39.801309   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:39.801309   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:42.273306   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:42.273367   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:42.273367   13076 sshutil.go:53] new ssh client: &{IP:172.20.108.120 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\id_rsa Username:docker}
	I1014 07:20:42.378794   13076 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.6340227s)
	I1014 07:20:42.378847   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1014 07:20:42.378847   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1200 bytes)
	I1014 07:20:42.424772   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1014 07:20:42.425289   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 07:20:42.471397   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1014 07:20:42.471964   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
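	
	The three files just copied into /etc/docker are the server half of the TLS setup referenced by the dockerd flags installed below (--tlscacert/--tlscert/--tlskey); the client half stays in the host's .minikube\certs directory. A hedged example of exercising them once dockerd is up, run from the host with paths adjusted to those files:
	
	    # Remote TLS-verified call to the daemon on its 2376 endpoint (sketch).
	    docker --tlsverify --tlscacert ca.pem --tlscert cert.pem --tlskey key.pem \
	        -H tcp://172.20.108.120:2376 version
	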
	I1014 07:20:42.527104   13076 provision.go:87] duration metric: took 14.3100395s to configureAuth
	I1014 07:20:42.527104   13076 buildroot.go:189] setting minikube options for container-runtime
	I1014 07:20:42.527712   13076 config.go:182] Loaded profile config "ha-132600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:20:42.528252   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:44.598308   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:44.598308   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:44.599305   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:47.040424   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:47.040637   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:47.045726   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:20:47.046398   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.108.120 22 <nil> <nil>}
	I1014 07:20:47.046398   13076 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1014 07:20:47.167710   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1014 07:20:47.167801   13076 buildroot.go:70] root file system type: tmpfs
	I1014 07:20:47.168208   13076 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1014 07:20:47.168315   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:49.202111   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:49.202331   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:49.202424   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:51.648278   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:51.648278   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:51.654248   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:20:51.655052   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.108.120 22 <nil> <nil>}
	I1014 07:20:51.655052   13076 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1014 07:20:51.814048   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1014 07:20:51.814229   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:53.871643   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:53.871643   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:53.872030   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:56.286054   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:56.286054   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:56.292357   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:20:56.293125   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.108.120 22 <nil> <nil>}
	I1014 07:20:56.293125   13076 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1014 07:20:58.458028   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1014 07:20:58.458028   13076 machine.go:96] duration metric: took 44.286361s to provisionDockerMachine
	I1014 07:20:58.458028   13076 client.go:171] duration metric: took 1m52.7353371s to LocalClient.Create
	I1014 07:20:58.458028   13076 start.go:167] duration metric: took 1m52.7353371s to libmachine.API.Create "ha-132600"
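	
	The "diff: can't stat" output above is expected on first boot: the guest image ships no /lib/systemd/system/docker.service, so the diff fails, the .new file is moved into place, and the unit is enabled and restarted. The install-if-changed idiom, generalized (sketch):
	
	    # Replace a unit and bounce the service only when its content changed.
	    dst=/lib/systemd/system/docker.service
	    sudo diff -u "$dst" "$dst.new" || {
	        sudo mv "$dst.new" "$dst"
	        sudo systemctl -f daemon-reload
	        sudo systemctl -f enable docker
	        sudo systemctl -f restart docker
	    }
	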
	I1014 07:20:58.458028   13076 start.go:293] postStartSetup for "ha-132600" (driver="hyperv")
	I1014 07:20:58.458028   13076 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 07:20:58.471262   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 07:20:58.471262   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:21:00.512590   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:21:00.513162   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:00.513321   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:21:02.924795   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:21:02.924929   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:02.925163   13076 sshutil.go:53] new ssh client: &{IP:172.20.108.120 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\id_rsa Username:docker}
	I1014 07:21:03.032582   13076 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.5613141s)
	I1014 07:21:03.044373   13076 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 07:21:03.050831   13076 info.go:137] Remote host: Buildroot 2023.02.9
	I1014 07:21:03.050831   13076 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I1014 07:21:03.051271   13076 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I1014 07:21:03.052198   13076 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem -> 9362.pem in /etc/ssl/certs
	I1014 07:21:03.052295   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem -> /etc/ssl/certs/9362.pem
	I1014 07:21:03.063873   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 07:21:03.081968   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem --> /etc/ssl/certs/9362.pem (1708 bytes)
	I1014 07:21:03.128637   13076 start.go:296] duration metric: took 4.6706031s for postStartSetup
	I1014 07:21:03.132031   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:21:05.196102   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:21:05.196622   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:05.196696   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:21:07.624940   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:21:07.625781   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:07.626103   13076 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\config.json ...
	I1014 07:21:07.629046   13076 start.go:128] duration metric: took 2m1.9103356s to createHost
	I1014 07:21:07.629046   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:21:09.687162   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:21:09.687420   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:09.687490   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:21:12.104556   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:21:12.104556   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:12.110165   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:21:12.110572   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.108.120 22 <nil> <nil>}
	I1014 07:21:12.110572   13076 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1014 07:21:12.241524   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728915672.238854784
	
	I1014 07:21:12.241655   13076 fix.go:216] guest clock: 1728915672.238854784
	I1014 07:21:12.241726   13076 fix.go:229] Guest: 2024-10-14 07:21:12.238854784 -0700 PDT Remote: 2024-10-14 07:21:07.6290467 -0700 PDT m=+127.381500801 (delta=4.609808084s)
	I1014 07:21:12.241841   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:21:14.299338   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:21:14.299599   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:14.299599   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:21:16.767477   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:21:16.767665   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:16.775565   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:21:16.775565   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.108.120 22 <nil> <nil>}
	I1014 07:21:16.775565   13076 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1728915672
	I1014 07:21:16.921038   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Oct 14 14:21:12 UTC 2024
	
	I1014 07:21:16.921096   13076 fix.go:236] clock set: Mon Oct 14 14:21:12 UTC 2024
	 (err=<nil>)
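	
	The ~4.6 s delta is clock drift accumulated while the VM booted; minikube corrects it by reading the guest clock (date +%s.%N), comparing it with the host, and forcing the guest to the host's epoch. A manual equivalent, assuming a POSIX shell on the host and the guest SSH key shown elsewhere in this log (sketch):
	
	    # Push the host's current epoch into the guest.
	    ssh -i id_rsa docker@172.20.108.120 "sudo date -s @$(date +%s)"
	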
	I1014 07:21:16.921169   13076 start.go:83] releasing machines lock for "ha-132600", held for 2m11.2030498s
	I1014 07:21:16.921319   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:21:18.988904   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:21:18.989898   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:18.989947   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:21:21.410167   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:21:21.410403   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:21.414482   13076 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1014 07:21:21.414683   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:21:21.423844   13076 ssh_runner.go:195] Run: cat /version.json
	I1014 07:21:21.423844   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:21:23.554001   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:21:23.554206   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:23.554362   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:21:23.565882   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:21:23.565882   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:23.565882   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:21:26.092249   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:21:26.092249   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:26.092379   13076 sshutil.go:53] new ssh client: &{IP:172.20.108.120 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\id_rsa Username:docker}
	I1014 07:21:26.119019   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:21:26.119086   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:26.119215   13076 sshutil.go:53] new ssh client: &{IP:172.20.108.120 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\id_rsa Username:docker}
	I1014 07:21:26.179573   13076 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.764971s)
	W1014 07:21:26.179670   13076 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1014 07:21:26.212879   13076 ssh_runner.go:235] Completed: cat /version.json: (4.7890281s)
	I1014 07:21:26.223815   13076 ssh_runner.go:195] Run: systemctl --version
	I1014 07:21:26.243251   13076 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 07:21:26.251321   13076 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 07:21:26.262431   13076 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 07:21:26.290027   13076 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1014 07:21:26.290027   13076 start.go:495] detecting cgroup driver to use...
	I1014 07:21:26.290027   13076 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W1014 07:21:26.296743   13076 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W1014 07:21:26.296743   13076 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
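	
	These two warnings are the spurious stderr lines produced by the failed probe above. Note what actually failed: the connectivity check ran curl.exe, the Windows binary name, inside the Linux guest, and bash replied "command not found" (exit 127), so the VM's network was never actually exercised. Inside the guest, the working form of the same probe is simply:
	
	    # The probe with the Linux binary name (sketch).
	    curl -sS -m 2 https://registry.k8s.io/
	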
	I1014 07:21:26.340326   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1014 07:21:26.370362   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1014 07:21:26.391878   13076 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1014 07:21:26.402847   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1014 07:21:26.433241   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1014 07:21:26.462619   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1014 07:21:26.491735   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1014 07:21:26.520404   13076 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 07:21:26.550615   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1014 07:21:26.580278   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1014 07:21:26.608564   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1014 07:21:26.636849   13076 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 07:21:26.656448   13076 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1014 07:21:26.667504   13076 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1014 07:21:26.700184   13076 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
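	
	The status-255 sysctl a few lines up is expected on a fresh guest: /proc/sys/net/bridge/ only appears once the br_netfilter module is loaded, which is exactly what the modprobe and ip_forward steps then arrange. The manual equivalent (sketch):
	
	    # Load the bridge-netfilter module if the sysctl is missing, then enable forwarding.
	    sudo sysctl net.bridge.bridge-nf-call-iptables || sudo modprobe br_netfilter
	    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	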
	I1014 07:21:26.726834   13076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:21:26.916163   13076 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1014 07:21:26.953200   13076 start.go:495] detecting cgroup driver to use...
	I1014 07:21:26.967627   13076 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1014 07:21:27.005080   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 07:21:27.039193   13076 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 07:21:27.083701   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 07:21:27.118670   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1014 07:21:27.153508   13076 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1014 07:21:27.209689   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1014 07:21:27.230765   13076 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 07:21:27.273213   13076 ssh_runner.go:195] Run: which cri-dockerd
	I1014 07:21:27.291783   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1014 07:21:27.308581   13076 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1014 07:21:27.351709   13076 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1014 07:21:27.539571   13076 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1014 07:21:27.725567   13076 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1014 07:21:27.725567   13076 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1014 07:21:27.768723   13076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:21:27.951142   13076 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1014 07:21:30.486027   13076 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5347969s)
	I1014 07:21:30.497722   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1014 07:21:30.531680   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1014 07:21:30.563742   13076 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1014 07:21:30.767218   13076 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1014 07:21:30.967093   13076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:21:31.191192   13076 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1014 07:21:31.229757   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1014 07:21:31.267426   13076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:21:31.462408   13076 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1014 07:21:31.569509   13076 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1014 07:21:31.581151   13076 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1014 07:21:31.591322   13076 start.go:563] Will wait 60s for crictl version
	I1014 07:21:31.604845   13076 ssh_runner.go:195] Run: which crictl
	I1014 07:21:31.622874   13076 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1014 07:21:31.680621   13076 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I1014 07:21:31.689423   13076 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1014 07:21:31.732594   13076 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1014 07:21:31.766375   13076 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	I1014 07:21:31.766375   13076 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I1014 07:21:31.770835   13076 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I1014 07:21:31.770835   13076 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I1014 07:21:31.770835   13076 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I1014 07:21:31.770835   13076 ip.go:211] Found interface: {Index:13 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:08:65:4d Flags:up|broadcast|multicast|running}
	I1014 07:21:31.773955   13076 ip.go:214] interface addr: fe80::b548:ff79:d464:75b8/64
	I1014 07:21:31.773955   13076 ip.go:214] interface addr: 172.20.96.1/20
	I1014 07:21:31.784171   13076 ssh_runner.go:195] Run: grep 172.20.96.1	host.minikube.internal$ /etc/hosts
	I1014 07:21:31.791192   13076 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.20.96.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
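	
	172.20.96.1 is the host side of the "vEthernet (Default Switch)" interface found above; publishing it as host.minikube.internal gives workloads in the guest a stable name for reaching services on the Windows host. From inside the guest (sketch):
	
	    grep host.minikube.internal /etc/hosts
	    # 172.20.96.1	host.minikube.internal
	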
	I1014 07:21:31.825649   13076 kubeadm.go:883] updating cluster {Name:ha-132600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-132600 Namespace:default APIServerHAVIP:172.20.111.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.108.120 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 07:21:31.825649   13076 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1014 07:21:31.834667   13076 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1014 07:21:31.867707   13076 docker.go:689] Got preloaded images: 
	I1014 07:21:31.867707   13076 docker.go:695] registry.k8s.io/kube-apiserver:v1.31.1 wasn't preloaded
	I1014 07:21:31.880238   13076 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1014 07:21:31.908893   13076 ssh_runner.go:195] Run: which lz4
	I1014 07:21:31.914374   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1014 07:21:31.924715   13076 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1014 07:21:31.930846   13076 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1014 07:21:31.931075   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (342028912 bytes)
	I1014 07:21:33.829320   13076 docker.go:653] duration metric: took 1.9148657s to copy over tarball
	I1014 07:21:33.840382   13076 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1014 07:21:42.539043   13076 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.6986506s)
	I1014 07:21:42.539174   13076 ssh_runner.go:146] rm: /preloaded.tar.lz4
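	
	This is the preload fast path: rather than pulling the eight control-plane images over the network, a ~342 MB lz4 tarball of a pre-populated /var/lib/docker is copied in (the earlier failed stat is just the existence check) and unpacked in place, then deleted. The unpack step as run above:
	
	    # Restore the preloaded image store directly under /var.
	    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	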
	I1014 07:21:42.611065   13076 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1014 07:21:42.636639   13076 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I1014 07:21:42.681041   13076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:21:42.879066   13076 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1014 07:21:46.161713   13076 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.2824935s)
	I1014 07:21:46.173131   13076 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1014 07:21:46.198645   13076 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1014 07:21:46.198645   13076 cache_images.go:84] Images are preloaded, skipping loading
	I1014 07:21:46.198645   13076 kubeadm.go:934] updating node { 172.20.108.120 8443 v1.31.1 docker true true} ...
	I1014 07:21:46.198645   13076 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-132600 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.20.108.120
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-132600 Namespace:default APIServerHAVIP:172.20.111.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 07:21:46.209892   13076 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1014 07:21:46.283733   13076 cni.go:84] Creating CNI manager for ""
	I1014 07:21:46.283861   13076 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1014 07:21:46.283932   13076 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1014 07:21:46.284000   13076 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.20.108.120 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-132600 NodeName:ha-132600 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.20.108.120"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.20.108.120 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 07:21:46.284290   13076 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.20.108.120
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-132600"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "172.20.108.120"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.20.108.120"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1014 07:21:46.284367   13076 kube-vip.go:115] generating kube-vip config ...
	I1014 07:21:46.295693   13076 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1014 07:21:46.324017   13076 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1014 07:21:46.324230   13076 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.20.111.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
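	
	This static pod implements the APIServerHAVIP from the cluster config: one kube-vip instance per control-plane node competes for the plndr-cp-lock lease, and the leader answers ARP for 172.20.111.254 and load-balances API traffic on port 8443. Once the cluster is up, the current VIP holder can be read from the lease (sketch):
	
	    kubectl -n kube-system get lease plndr-cp-lock
	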
	I1014 07:21:46.334184   13076 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1014 07:21:46.349321   13076 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 07:21:46.360771   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1014 07:21:46.379224   13076 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (310 bytes)
	I1014 07:21:46.412080   13076 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 07:21:46.441691   13076 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I1014 07:21:46.479025   13076 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
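	
	The two in-memory payloads just written are the YAML documents printed above: kubeadm.yaml.new holds the kubeadm configuration (the .new suffix presumably follows the same install-if-changed idiom used for docker.service earlier), and kube-vip.yaml lands in the static-pod manifest directory so the kubelet launches it alongside the control plane. A sketch of the init step that ultimately consumes the config (preflight flags elided; they vary by minikube version):
	
	    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml
	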
	I1014 07:21:46.519461   13076 ssh_runner.go:195] Run: grep 172.20.111.254	control-plane.minikube.internal$ /etc/hosts
	I1014 07:21:46.525021   13076 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.20.111.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 07:21:46.554943   13076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:21:46.730466   13076 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 07:21:46.761153   13076 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600 for IP: 172.20.108.120
	I1014 07:21:46.761153   13076 certs.go:194] generating shared ca certs ...
	I1014 07:21:46.761153   13076 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:21:46.761825   13076 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I1014 07:21:46.762680   13076 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I1014 07:21:46.762846   13076 certs.go:256] generating profile certs ...
	I1014 07:21:46.763605   13076 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\client.key
	I1014 07:21:46.763605   13076 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\client.crt with IP's: []
	I1014 07:21:46.955865   13076 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\client.crt ...
	I1014 07:21:46.955865   13076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\client.crt: {Name:mk4a5e51a4b9d62279c7ef4ef47716bcdf88d704 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:21:46.957857   13076 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\client.key ...
	I1014 07:21:46.957857   13076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\client.key: {Name:mkc80e99cfe4d8d17198a87fb188cfa1a72d727d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:21:46.958881   13076 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.key.59094d61
	I1014 07:21:46.958881   13076 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.crt.59094d61 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.20.108.120 172.20.111.254]
	I1014 07:21:47.375660   13076 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.crt.59094d61 ...
	I1014 07:21:47.375660   13076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.crt.59094d61: {Name:mke858ab0ac103663123708f9814cfdffe03211e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:21:47.377216   13076 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.key.59094d61 ...
	I1014 07:21:47.377216   13076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.key.59094d61: {Name:mk972fca892a193d6b4e84d42e27ed86ce9f0457 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:21:47.377811   13076 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.crt.59094d61 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.crt
	I1014 07:21:47.391806   13076 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.key.59094d61 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.key
	I1014 07:21:47.392823   13076 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.key
	I1014 07:21:47.392823   13076 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.crt with IP's: []
	I1014 07:21:47.675751   13076 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.crt ...
	I1014 07:21:47.675751   13076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.crt: {Name:mk5a9f1239213dd0a0f36b4ee41d5ac56cbd79c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:21:47.677918   13076 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.key ...
	I1014 07:21:47.677918   13076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.key: {Name:mkb931094180fef1516b0d43acb6430ecbcbb3fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:21:47.678894   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I1014 07:21:47.678894   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I1014 07:21:47.678894   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1014 07:21:47.678894   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1014 07:21:47.679943   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1014 07:21:47.679943   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1014 07:21:47.679943   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1014 07:21:47.691415   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1014 07:21:47.693468   13076 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\936.pem (1338 bytes)
	W1014 07:21:47.693468   13076 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\936_empty.pem, impossibly tiny 0 bytes
	I1014 07:21:47.694004   13076 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1014 07:21:47.694400   13076 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I1014 07:21:47.694671   13076 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1014 07:21:47.695091   13076 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I1014 07:21:47.695960   13076 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem (1708 bytes)
	I1014 07:21:47.696145   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\936.pem -> /usr/share/ca-certificates/936.pem
	I1014 07:21:47.696401   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem -> /usr/share/ca-certificates/9362.pem
	I1014 07:21:47.696580   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1014 07:21:47.697580   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 07:21:47.742705   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1014 07:21:47.790988   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 07:21:47.837975   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 07:21:47.884872   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1014 07:21:47.936878   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1014 07:21:47.985330   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 07:21:48.029181   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 07:21:48.078824   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\936.pem --> /usr/share/ca-certificates/936.pem (1338 bytes)
	I1014 07:21:48.127907   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem --> /usr/share/ca-certificates/9362.pem (1708 bytes)
	I1014 07:21:48.174542   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 07:21:48.218635   13076 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 07:21:48.264009   13076 ssh_runner.go:195] Run: openssl version
	I1014 07:21:48.282939   13076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 07:21:48.310256   13076 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 07:21:48.318050   13076 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 13:44 /usr/share/ca-certificates/minikubeCA.pem
	I1014 07:21:48.328768   13076 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 07:21:48.348090   13076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 07:21:48.376484   13076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/936.pem && ln -fs /usr/share/ca-certificates/936.pem /etc/ssl/certs/936.pem"
	I1014 07:21:48.408494   13076 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/936.pem
	I1014 07:21:48.416513   13076 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 14:00 /usr/share/ca-certificates/936.pem
	I1014 07:21:48.427815   13076 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/936.pem
	I1014 07:21:48.447033   13076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/936.pem /etc/ssl/certs/51391683.0"
	I1014 07:21:48.474293   13076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9362.pem && ln -fs /usr/share/ca-certificates/9362.pem /etc/ssl/certs/9362.pem"
	I1014 07:21:48.502293   13076 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9362.pem
	I1014 07:21:48.509688   13076 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 14:00 /usr/share/ca-certificates/9362.pem
	I1014 07:21:48.519811   13076 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9362.pem
	I1014 07:21:48.540403   13076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9362.pem /etc/ssl/certs/3ec20f2e.0"
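
The six commands above (07:21:48.28 through 07:21:48.54) repeat one pattern per CA file: symlink the PEM under /etc/ssl/certs, compute its OpenSSL subject hash, then symlink the hash name (b5213941.0, 51391683.0, 3ec20f2e.0) so OpenSSL's lookup-by-hash finds it. A minimal sketch of one round, using the same paths and the hash value from this run:

    PEM=/usr/share/ca-certificates/minikubeCA.pem
    sudo test -s "$PEM" && sudo ln -fs "$PEM" /etc/ssl/certs/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$PEM")          # b5213941 in this run
    sudo test -L "/etc/ssl/certs/$HASH.0" || sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/$HASH.0"
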
	I1014 07:21:48.569007   13076 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 07:21:48.575758   13076 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1014 07:21:48.576118   13076 kubeadm.go:392] StartCluster: {Name:ha-132600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-132600 Namespace:default APIServerHAVIP:172.20.111.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.108.120 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 07:21:48.584978   13076 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1014 07:21:48.619430   13076 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 07:21:48.647606   13076 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 07:21:48.676291   13076 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 07:21:48.693679   13076 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 07:21:48.693679   13076 kubeadm.go:157] found existing configuration files:
	
	I1014 07:21:48.703613   13076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 07:21:48.722067   13076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 07:21:48.732618   13076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 07:21:48.759836   13076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 07:21:48.777622   13076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 07:21:48.789748   13076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 07:21:48.818512   13076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 07:21:48.837396   13076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 07:21:48.848696   13076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 07:21:48.876926   13076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 07:21:48.893530   13076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 07:21:48.904372   13076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
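
The grep/rm pairs above are minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at the control-plane endpoint. Here every grep exits 2 because the files do not exist yet (first start), so each rm is a no-op. The sequence amounts to this sketch:

    ENDPOINT='https://control-plane.minikube.internal:8443'
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
        sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done
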
	I1014 07:21:48.920210   13076 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1014 07:21:49.392185   13076 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 07:22:04.140256   13076 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1014 07:22:04.140412   13076 kubeadm.go:310] [preflight] Running pre-flight checks
	I1014 07:22:04.140501   13076 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 07:22:04.140848   13076 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 07:22:04.141206   13076 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 07:22:04.141311   13076 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 07:22:04.144180   13076 out.go:235]   - Generating certificates and keys ...
	I1014 07:22:04.144373   13076 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1014 07:22:04.144575   13076 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1014 07:22:04.144939   13076 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1014 07:22:04.145172   13076 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1014 07:22:04.145315   13076 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1014 07:22:04.145521   13076 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1014 07:22:04.145845   13076 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1014 07:22:04.146212   13076 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-132600 localhost] and IPs [172.20.108.120 127.0.0.1 ::1]
	I1014 07:22:04.146365   13076 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1014 07:22:04.146614   13076 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-132600 localhost] and IPs [172.20.108.120 127.0.0.1 ::1]
	I1014 07:22:04.147003   13076 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1014 07:22:04.147200   13076 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1014 07:22:04.147236   13076 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1014 07:22:04.147236   13076 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 07:22:04.147236   13076 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 07:22:04.147236   13076 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 07:22:04.147954   13076 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 07:22:04.147987   13076 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 07:22:04.147987   13076 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 07:22:04.147987   13076 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 07:22:04.148785   13076 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 07:22:04.151235   13076 out.go:235]   - Booting up control plane ...
	I1014 07:22:04.151235   13076 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 07:22:04.151819   13076 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 07:22:04.152046   13076 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 07:22:04.152333   13076 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 07:22:04.152333   13076 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 07:22:04.152333   13076 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1014 07:22:04.152333   13076 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 07:22:04.152333   13076 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 07:22:04.153426   13076 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 523.263843ms
	I1014 07:22:04.153534   13076 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1014 07:22:04.153534   13076 kubeadm.go:310] [api-check] The API server is healthy after 9.176461865s
	I1014 07:22:04.153534   13076 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1014 07:22:04.154243   13076 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1014 07:22:04.154243   13076 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1014 07:22:04.154779   13076 kubeadm.go:310] [mark-control-plane] Marking the node ha-132600 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1014 07:22:04.154779   13076 kubeadm.go:310] [bootstrap-token] Using token: rm3d8a.inqsemtt5b5l5x2e
	I1014 07:22:04.157926   13076 out.go:235]   - Configuring RBAC rules ...
	I1014 07:22:04.158760   13076 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1014 07:22:04.158928   13076 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1014 07:22:04.158928   13076 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1014 07:22:04.159656   13076 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1014 07:22:04.159851   13076 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1014 07:22:04.160089   13076 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1014 07:22:04.160270   13076 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1014 07:22:04.160529   13076 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1014 07:22:04.160529   13076 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1014 07:22:04.160529   13076 kubeadm.go:310] 
	I1014 07:22:04.160529   13076 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1014 07:22:04.160529   13076 kubeadm.go:310] 
	I1014 07:22:04.160529   13076 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1014 07:22:04.160529   13076 kubeadm.go:310] 
	I1014 07:22:04.161100   13076 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1014 07:22:04.161186   13076 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1014 07:22:04.161352   13076 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1014 07:22:04.161352   13076 kubeadm.go:310] 
	I1014 07:22:04.161534   13076 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1014 07:22:04.161534   13076 kubeadm.go:310] 
	I1014 07:22:04.161706   13076 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1014 07:22:04.161706   13076 kubeadm.go:310] 
	I1014 07:22:04.161950   13076 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1014 07:22:04.161950   13076 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1014 07:22:04.161950   13076 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1014 07:22:04.161950   13076 kubeadm.go:310] 
	I1014 07:22:04.162515   13076 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1014 07:22:04.162607   13076 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1014 07:22:04.162607   13076 kubeadm.go:310] 
	I1014 07:22:04.162607   13076 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token rm3d8a.inqsemtt5b5l5x2e \
	I1014 07:22:04.162607   13076 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f876f951efbe7f1ed47ca578ee0d33e6ce8fe69c1a008a8154ee00f56ac84e46 \
	I1014 07:22:04.163162   13076 kubeadm.go:310] 	--control-plane 
	I1014 07:22:04.163261   13076 kubeadm.go:310] 
	I1014 07:22:04.163340   13076 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1014 07:22:04.163340   13076 kubeadm.go:310] 
	I1014 07:22:04.163570   13076 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token rm3d8a.inqsemtt5b5l5x2e \
	I1014 07:22:04.163570   13076 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f876f951efbe7f1ed47ca578ee0d33e6ce8fe69c1a008a8154ee00f56ac84e46 
	I1014 07:22:04.163570   13076 cni.go:84] Creating CNI manager for ""
	I1014 07:22:04.163570   13076 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1014 07:22:04.167124   13076 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1014 07:22:04.179134   13076 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1014 07:22:04.187701   13076 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1014 07:22:04.187701   13076 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1014 07:22:04.233053   13076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
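
With a single node found, the CNI manager recommends kindnet: after the stat confirms the bundled /opt/cni/bin/portmap plugin is present, the manifest is written to /var/tmp/minikube/cni.yaml and applied with the kubelet-bundled kubectl. Condensed, the step is:

    stat /opt/cni/bin/portmap    # verify the bundled CNI plugins are installed
    sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply \
        --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
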
	I1014 07:22:04.912894   13076 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1014 07:22:04.925574   13076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-132600 minikube.k8s.io/updated_at=2024_10_14T07_22_04_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b minikube.k8s.io/name=ha-132600 minikube.k8s.io/primary=true
	I1014 07:22:04.927795   13076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 07:22:04.958241   13076 ops.go:34] apiserver oom_adj: -16
	I1014 07:22:05.168034   13076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 07:22:05.667048   13076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 07:22:06.166157   13076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 07:22:06.667561   13076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 07:22:07.167283   13076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 07:22:07.305312   13076 kubeadm.go:1113] duration metric: took 2.3924152s to wait for elevateKubeSystemPrivileges
	I1014 07:22:07.306284   13076 kubeadm.go:394] duration metric: took 18.730142s to StartCluster
	I1014 07:22:07.306284   13076 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:22:07.306284   13076 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1014 07:22:07.308077   13076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:22:07.309777   13076 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1014 07:22:07.309777   13076 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.20.108.120 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1014 07:22:07.309953   13076 start.go:241] waiting for startup goroutines ...
	I1014 07:22:07.309868   13076 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1014 07:22:07.310143   13076 addons.go:69] Setting storage-provisioner=true in profile "ha-132600"
	I1014 07:22:07.310235   13076 addons.go:234] Setting addon storage-provisioner=true in "ha-132600"
	I1014 07:22:07.310143   13076 addons.go:69] Setting default-storageclass=true in profile "ha-132600"
	I1014 07:22:07.310402   13076 config.go:182] Loaded profile config "ha-132600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:22:07.310402   13076 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-132600"
	I1014 07:22:07.310402   13076 host.go:66] Checking if "ha-132600" exists ...
	I1014 07:22:07.311457   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:22:07.311926   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:22:07.495748   13076 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.20.96.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1014 07:22:08.130501   13076 start.go:971] {"host.minikube.internal": 172.20.96.1} host record injected into CoreDNS's ConfigMap
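
The pipeline at 07:22:07.49 edits the CoreDNS Corefile in place with sed before replacing the ConfigMap: it enables query logging and injects a hosts block that resolves host.minikube.internal to the host gateway. Reconstructed from the sed expressions, the fragment added to the Corefile is:

    hosts {
       172.20.96.1 host.minikube.internal
       fallthrough
    }
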
	I1014 07:22:09.594501   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:22:09.594501   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:09.594786   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:22:09.594878   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:09.595740   13076 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1014 07:22:09.596691   13076 kapi.go:59] client config for ha-132600: &rest.Config{Host:"https://172.20.111.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-132600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-132600\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2926ae0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1014 07:22:09.597939   13076 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 07:22:09.598267   13076 cert_rotation.go:140] Starting client certificate rotation controller
	I1014 07:22:09.598703   13076 addons.go:234] Setting addon default-storageclass=true in "ha-132600"
	I1014 07:22:09.598956   13076 host.go:66] Checking if "ha-132600" exists ...
	I1014 07:22:09.600143   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:22:09.600143   13076 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 07:22:09.600247   13076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1014 07:22:09.600352   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:22:11.901872   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:22:11.901872   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:11.902855   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:22:11.913525   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:22:11.914554   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:11.914725   13076 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1014 07:22:11.914841   13076 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1014 07:22:11.915002   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:22:14.163865   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:22:14.163865   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:14.163865   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:22:14.597752   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:22:14.597752   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:14.598273   13076 sshutil.go:53] new ssh client: &{IP:172.20.108.120 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\id_rsa Username:docker}
	I1014 07:22:14.755906   13076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 07:22:16.766331   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:22:16.766798   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:16.767079   13076 sshutil.go:53] new ssh client: &{IP:172.20.108.120 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\id_rsa Username:docker}
	I1014 07:22:16.907784   13076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
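
Addon enablement is the same two-step dance for both addons: scp the manifest into /etc/kubernetes/addons over the freshly opened SSH session, then apply it with the in-VM kubectl. The two applies above could equally be issued as one command:

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
        /var/lib/minikube/binaries/v1.31.1/kubectl apply \
        -f /etc/kubernetes/addons/storage-provisioner.yaml \
        -f /etc/kubernetes/addons/storageclass.yaml
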
	I1014 07:22:17.107355   13076 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1014 07:22:17.107355   13076 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1014 07:22:17.108322   13076 round_trippers.go:463] GET https://172.20.111.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1014 07:22:17.108322   13076 round_trippers.go:469] Request Headers:
	I1014 07:22:17.108322   13076 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:22:17.108322   13076 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:22:17.120122   13076 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1014 07:22:17.120927   13076 round_trippers.go:463] PUT https://172.20.111.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1014 07:22:17.121006   13076 round_trippers.go:469] Request Headers:
	I1014 07:22:17.121006   13076 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:22:17.121006   13076 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:22:17.121089   13076 round_trippers.go:473]     Content-Type: application/json
	I1014 07:22:17.124876   13076 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 07:22:17.128782   13076 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1014 07:22:17.133792   13076 addons.go:510] duration metric: took 9.8239112s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1014 07:22:17.133792   13076 start.go:246] waiting for cluster config update ...
	I1014 07:22:17.133792   13076 start.go:255] writing updated cluster config ...
	I1014 07:22:17.140793   13076 out.go:201] 
	I1014 07:22:17.150794   13076 config.go:182] Loaded profile config "ha-132600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:22:17.151784   13076 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\config.json ...
	I1014 07:22:17.157758   13076 out.go:177] * Starting "ha-132600-m02" control-plane node in "ha-132600" cluster
	I1014 07:22:17.160381   13076 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1014 07:22:17.160381   13076 cache.go:56] Caching tarball of preloaded images
	I1014 07:22:17.160381   13076 preload.go:172] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1014 07:22:17.160381   13076 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1014 07:22:17.160381   13076 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\config.json ...
	I1014 07:22:17.168672   13076 start.go:360] acquireMachinesLock for ha-132600-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 07:22:17.169194   13076 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-132600-m02"
	I1014 07:22:17.169334   13076 start.go:93] Provisioning new machine with config: &{Name:ha-132600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-132600 Namespace:default APIServerHAVIP:172.20.111.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.108.120 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1014 07:22:17.169334   13076 start.go:125] createHost starting for "m02" (driver="hyperv")
	I1014 07:22:17.171957   13076 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1014 07:22:17.171957   13076 start.go:159] libmachine.API.Create for "ha-132600" (driver="hyperv")
	I1014 07:22:17.171957   13076 client.go:168] LocalClient.Create starting
	I1014 07:22:17.171957   13076 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I1014 07:22:17.173148   13076 main.go:141] libmachine: Decoding PEM data...
	I1014 07:22:17.173148   13076 main.go:141] libmachine: Parsing certificate...
	I1014 07:22:17.173148   13076 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I1014 07:22:17.173148   13076 main.go:141] libmachine: Decoding PEM data...
	I1014 07:22:17.173802   13076 main.go:141] libmachine: Parsing certificate...
	I1014 07:22:17.173802   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I1014 07:22:19.094383   13076 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I1014 07:22:19.094383   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:19.094970   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I1014 07:22:20.808796   13076 main.go:141] libmachine: [stdout =====>] : False
	
	I1014 07:22:20.808796   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:20.808897   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1014 07:22:22.292959   13076 main.go:141] libmachine: [stdout =====>] : True
	
	I1014 07:22:22.292959   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:22.293074   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1014 07:22:25.811891   13076 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1014 07:22:25.811891   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:25.813583   13076 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1014 07:22:26.334218   13076 main.go:141] libmachine: Creating SSH key...
	I1014 07:22:26.579833   13076 main.go:141] libmachine: Creating VM...
	I1014 07:22:26.580305   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1014 07:22:29.402237   13076 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1014 07:22:29.402237   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:29.402237   13076 main.go:141] libmachine: Using switch "Default Switch"
	I1014 07:22:29.402237   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1014 07:22:31.181025   13076 main.go:141] libmachine: [stdout =====>] : True
	
	I1014 07:22:31.181025   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:31.181025   13076 main.go:141] libmachine: Creating VHD
	I1014 07:22:31.181025   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I1014 07:22:34.986705   13076 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 9BEBB030-6569-4A5D-A43D-C976E242B24D
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I1014 07:22:34.986953   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:34.986953   13076 main.go:141] libmachine: Writing magic tar header
	I1014 07:22:34.986953   13076 main.go:141] libmachine: Writing SSH key tar header
	I1014 07:22:34.998864   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I1014 07:22:38.117497   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:22:38.117497   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:38.117497   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\disk.vhd' -SizeBytes 20000MB
	I1014 07:22:40.750048   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:22:40.750048   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:40.750048   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-132600-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I1014 07:22:44.293371   13076 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-132600-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I1014 07:22:44.294009   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:44.294158   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-132600-m02 -DynamicMemoryEnabled $false
	I1014 07:22:46.495846   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:22:46.496106   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:46.496211   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-132600-m02 -Count 2
	I1014 07:22:48.637702   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:22:48.637702   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:48.637702   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-132600-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\boot2docker.iso'
	I1014 07:22:51.128606   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:22:51.128606   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:51.128606   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-132600-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\disk.vhd'
	I1014 07:22:53.765202   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:22:53.765775   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:53.765775   13076 main.go:141] libmachine: Starting VM...
	I1014 07:22:53.765847   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-132600-m02
	I1014 07:22:56.961396   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:22:56.961396   13076 main.go:141] libmachine: [stderr =====>] : 
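
Everything from 07:22:31 to here is libmachine creating the m02 guest by shelling out to powershell.exe, one cmdlet per call. Collapsed into a single hedged PowerShell sketch with the same names and sizes as the log (the fixed 10MB VHD is only a carrier for the magic tar header and SSH key before it is converted and grown):

    $d = 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02'
    Hyper-V\New-VHD -Path "$d\fixed.vhd" -SizeBytes 10MB -Fixed
    Hyper-V\Convert-VHD -Path "$d\fixed.vhd" -DestinationPath "$d\disk.vhd" -VHDType Dynamic -DeleteSource
    Hyper-V\Resize-VHD -Path "$d\disk.vhd" -SizeBytes 20000MB
    Hyper-V\New-VM ha-132600-m02 -Path $d -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
    Hyper-V\Set-VMMemory -VMName ha-132600-m02 -DynamicMemoryEnabled $false
    Hyper-V\Set-VMProcessor ha-132600-m02 -Count 2
    Hyper-V\Set-VMDvdDrive -VMName ha-132600-m02 -Path "$d\boot2docker.iso"
    Hyper-V\Add-VMHardDiskDrive -VMName ha-132600-m02 -Path "$d\disk.vhd"
    Hyper-V\Start-VM ha-132600-m02
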
	I1014 07:22:56.962040   13076 main.go:141] libmachine: Waiting for host to start...
	I1014 07:22:56.962040   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:22:59.251592   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:22:59.251592   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:59.251838   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:01.758067   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:23:01.758159   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:02.758293   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:04.921490   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:04.921565   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:04.921802   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:07.441595   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:23:07.441595   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:08.443053   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:10.659721   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:10.659721   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:10.660418   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:13.126618   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:23:13.126952   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:14.127183   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:16.307414   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:16.307414   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:16.307500   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:18.748902   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:23:18.749678   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:19.749962   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:21.893323   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:21.893323   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:21.893323   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:24.464351   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:23:24.465392   13076 main.go:141] libmachine: [stderr =====>] : 
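
The repeated state/ipaddresses probes between 07:22:56 and 07:23:24 are a poll loop: as long as the VM reports Running but its first adapter has no address yet, libmachine waits a second and asks again, until DHCP on the Default Switch hands out 172.20.111.83. The same loop as a PowerShell sketch:

    $ip = $null
    while (-not $ip) {
        Start-Sleep -Seconds 1
        if ((Hyper-V\Get-VM ha-132600-m02).State -ne 'Running') { continue }
        $ip = ((Hyper-V\Get-VM ha-132600-m02).NetworkAdapters[0]).IPAddresses[0]
    }
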
	I1014 07:23:24.465471   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:26.567626   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:26.567626   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:26.567626   13076 machine.go:93] provisionDockerMachine start ...
	I1014 07:23:26.568400   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:28.681591   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:28.681591   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:28.682618   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:31.178308   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:23:31.178308   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:31.192309   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:23:31.207523   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.111.83 22 <nil> <nil>}
	I1014 07:23:31.208024   13076 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 07:23:31.337861   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1014 07:23:31.337861   13076 buildroot.go:166] provisioning hostname "ha-132600-m02"
	I1014 07:23:31.337941   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:33.402839   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:33.402839   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:33.403340   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:35.894312   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:23:35.894312   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:35.901210   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:23:35.901699   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.111.83 22 <nil> <nil>}
	I1014 07:23:35.901823   13076 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-132600-m02 && echo "ha-132600-m02" | sudo tee /etc/hostname
	I1014 07:23:36.071040   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-132600-m02
	
	I1014 07:23:36.071040   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:38.150707   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:38.151729   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:38.151729   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:40.639242   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:23:40.639242   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:40.644598   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:23:40.645419   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.111.83 22 <nil> <nil>}
	I1014 07:23:40.645490   13076 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-132600-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-132600-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-132600-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 07:23:40.801974   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: 
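
provisionDockerMachine names the guest in two steps over SSH: set the kernel hostname and persist it in /etc/hostname, then make sure /etc/hosts maps 127.0.1.1 to the new name so the machine resolves itself before DNS does. Eliding the sed branch above that rewrites an existing 127.0.1.1 entry, the two commands reduce to:

    sudo hostname ha-132600-m02 && echo 'ha-132600-m02' | sudo tee /etc/hostname
    grep -q 'ha-132600-m02' /etc/hosts || echo '127.0.1.1 ha-132600-m02' | sudo tee -a /etc/hosts
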
	I1014 07:23:40.801974   13076 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I1014 07:23:40.801974   13076 buildroot.go:174] setting up certificates
	I1014 07:23:40.801974   13076 provision.go:84] configureAuth start
	I1014 07:23:40.801974   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:42.869101   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:42.869101   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:42.869101   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:45.382211   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:23:45.382688   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:45.382688   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:47.501182   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:47.501778   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:47.501832   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:50.008838   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:23:50.008838   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:50.009193   13076 provision.go:143] copyHostCerts
	I1014 07:23:50.009346   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I1014 07:23:50.009346   13076 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I1014 07:23:50.009346   13076 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I1014 07:23:50.010015   13076 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1014 07:23:50.011395   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I1014 07:23:50.011967   13076 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I1014 07:23:50.011967   13076 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I1014 07:23:50.011967   13076 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1014 07:23:50.013212   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I1014 07:23:50.014109   13076 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I1014 07:23:50.014109   13076 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I1014 07:23:50.014507   13076 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I1014 07:23:50.015649   13076 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-132600-m02 san=[127.0.0.1 172.20.111.83 ha-132600-m02 localhost minikube]
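configureAuth signs a per-machine server certificate with the minikube CA, with SANs covering loopback, the VM's address, and its hostnames (the san=[...] list above). A rough openssl equivalent, assuming nothing about minikube's internal helper; file names and validity period are illustrative:

    # Illustrative only: the same "server cert with SANs" step done with openssl.
    # ca.pem / ca-key.pem stand in for the CA paths from the auth options above.
    openssl req -new -newkey rsa:2048 -nodes \
      -keyout server-key.pem -out server.csr -subj "/O=jenkins.ha-132600-m02"
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem \
      -CAcreateserial -days 365 -out server.pem \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:172.20.111.83,DNS:ha-132600-m02,DNS:localhost,DNS:minikube')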
	I1014 07:23:50.130152   13076 provision.go:177] copyRemoteCerts
	I1014 07:23:50.140319   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 07:23:50.140319   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:52.242533   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:52.242533   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:52.243067   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:54.757999   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:23:54.758132   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:54.758132   13076 sshutil.go:53] new ssh client: &{IP:172.20.111.83 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\id_rsa Username:docker}
	I1014 07:23:54.867658   13076 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7273333s)
	I1014 07:23:54.867658   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1014 07:23:54.867658   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1014 07:23:54.915126   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1014 07:23:54.915808   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I1014 07:23:54.962775   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1014 07:23:54.963449   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1014 07:23:55.012733   13076 provision.go:87] duration metric: took 14.2107408s to configureAuth
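The three scp calls above land ca.pem, server.pem and server-key.pem where the dockerd ExecStart written below expects them (--tlscacert/--tlscert/--tlskey under /etc/docker), with the daemon listening on tcp://0.0.0.0:2376 under --tlsverify. Once it is up, the host-side client certs staged by copyHostCerts can exercise that endpoint; a hedged check with POSIX-style paths (in this log the client certs actually live under the Windows .minikube\certs directory):

    # Verify the TLS-guarded daemon endpoint from the host (paths illustrative).
    docker --tlsverify \
      --tlscacert "$HOME/.minikube/certs/ca.pem" \
      --tlscert   "$HOME/.minikube/certs/cert.pem" \
      --tlskey    "$HOME/.minikube/certs/key.pem" \
      -H tcp://172.20.111.83:2376 version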
	I1014 07:23:55.012733   13076 buildroot.go:189] setting minikube options for container-runtime
	I1014 07:23:55.013734   13076 config.go:182] Loaded profile config "ha-132600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:23:55.013734   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:57.174688   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:57.174688   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:57.175736   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:59.668037   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:23:59.668597   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:59.675074   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:23:59.675608   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.111.83 22 <nil> <nil>}
	I1014 07:23:59.675873   13076 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1014 07:23:59.826626   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1014 07:23:59.826626   13076 buildroot.go:70] root file system type: tmpfs
	I1014 07:23:59.826926   13076 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1014 07:23:59.827038   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:01.963890   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:01.964568   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:01.964568   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:04.515125   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:04.515943   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:04.521824   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:24:04.522248   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.111.83 22 <nil> <nil>}
	I1014 07:24:04.522248   13076 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.20.108.120"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1014 07:24:04.691427   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.20.108.120
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1014 07:24:04.692090   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:06.819823   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:06.820339   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:06.820339   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:09.321630   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:09.321630   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:09.326748   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:24:09.327748   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.111.83 22 <nil> <nil>}
	I1014 07:24:09.327808   13076 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1014 07:24:11.561493   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1014 07:24:11.561493   13076 machine.go:96] duration metric: took 44.993809s to provisionDockerMachine
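Two details in the exchange above: the unit is first written to docker.service.new, and the follow-up one-liner installs it only when it differs from what is on disk. Here diff fails because no docker.service existed yet, so the mv branch runs and enabling the unit creates the symlink shown. The install-if-changed idiom, extracted as a standalone sketch:

    #!/usr/bin/env bash
    # Install a systemd unit only when it differs from the one already in place.
    # Paths mirror the log; any unit works the same way.
    set -euo pipefail
    SRC=/lib/systemd/system/docker.service.new
    DST=/lib/systemd/system/docker.service
    if ! sudo diff -u "$DST" "$SRC"; then
      sudo mv "$SRC" "$DST"
      sudo systemctl daemon-reload
      sudo systemctl enable docker
      sudo systemctl restart docker
    fi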
	I1014 07:24:11.561493   13076 client.go:171] duration metric: took 1m54.3893961s to LocalClient.Create
	I1014 07:24:11.561493   13076 start.go:167] duration metric: took 1m54.3893961s to libmachine.API.Create "ha-132600"
	I1014 07:24:11.561493   13076 start.go:293] postStartSetup for "ha-132600-m02" (driver="hyperv")
	I1014 07:24:11.561493   13076 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 07:24:11.572490   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 07:24:11.572490   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:13.652544   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:13.653401   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:13.653599   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:16.188638   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:16.188638   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:16.189304   13076 sshutil.go:53] new ssh client: &{IP:172.20.111.83 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\id_rsa Username:docker}
	I1014 07:24:16.298812   13076 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7262867s)
	I1014 07:24:16.310231   13076 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 07:24:16.316927   13076 info.go:137] Remote host: Buildroot 2023.02.9
	I1014 07:24:16.316927   13076 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I1014 07:24:16.317465   13076 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I1014 07:24:16.318548   13076 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem -> 9362.pem in /etc/ssl/certs
	I1014 07:24:16.318548   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem -> /etc/ssl/certs/9362.pem
	I1014 07:24:16.332285   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 07:24:16.351469   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem --> /etc/ssl/certs/9362.pem (1708 bytes)
	I1014 07:24:16.401820   13076 start.go:296] duration metric: took 4.8403207s for postStartSetup
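postStartSetup also mirrored the profile's files\etc\ssl\certs\9362.pem into the guest at /etc/ssl/certs/9362.pem: everything under the .minikube files directory is copied into the VM rooted at /. Presumably the usual way to inject extra material such as a private CA; a sketch with illustrative POSIX-style paths:

    # Anything under the .minikube files dir is mirrored into the guest rooted at /.
    mkdir -p "$HOME/.minikube/files/etc/ssl/certs"
    cp extra-ca.pem "$HOME/.minikube/files/etc/ssl/certs/extra-ca.pem"
    # lands at /etc/ssl/certs/extra-ca.pem in the VM on the next start/provision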
	I1014 07:24:16.404096   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:18.500218   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:18.501234   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:18.501234   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:20.974982   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:20.975125   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:20.975125   13076 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\config.json ...
	I1014 07:24:20.978124   13076 start.go:128] duration metric: took 2m3.8086365s to createHost
	I1014 07:24:20.978209   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:23.112937   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:23.113001   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:23.113001   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:25.636675   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:25.637206   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:25.641953   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:24:25.642670   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.111.83 22 <nil> <nil>}
	I1014 07:24:25.642670   13076 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1014 07:24:25.768945   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728915865.766702127
	
	I1014 07:24:25.769174   13076 fix.go:216] guest clock: 1728915865.766702127
	I1014 07:24:25.769174   13076 fix.go:229] Guest: 2024-10-14 07:24:25.766702127 -0700 PDT Remote: 2024-10-14 07:24:20.978124 -0700 PDT m=+320.730336401 (delta=4.788578127s)
	I1014 07:24:25.769174   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:27.845838   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:27.845838   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:27.845838   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:30.364175   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:30.364263   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:30.369382   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:24:30.369967   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.111.83 22 <nil> <nil>}
	I1014 07:24:30.370049   13076 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1728915865
	I1014 07:24:30.508807   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Oct 14 14:24:25 UTC 2024
	
	I1014 07:24:30.508807   13076 fix.go:236] clock set: Mon Oct 14 14:24:25 UTC 2024
	 (err=<nil>)
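fix.go above compares the guest clock against the host's (Guest vs Remote, delta of about 4.79 s accumulated while the VM was created) and, the skew being over its threshold, resets the guest with date -s @<epoch>. The same check done by hand over SSH, as a sketch; the host string and 2-second threshold are assumptions, not minikube's values:

    # Measure guest/host clock skew over SSH; resync when it exceeds 2 seconds.
    HOST=docker@172.20.111.83                  # guest address from the log
    remote=$(ssh "$HOST" 'date +%s')
    local_now=$(date +%s)
    skew=$(( remote - local_now ))
    if [ "${skew#-}" -gt 2 ]; then             # ${skew#-} strips the sign
      ssh "$HOST" "sudo date -s @${local_now}"
    fi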
	I1014 07:24:30.508807   13076 start.go:83] releasing machines lock for "ha-132600-m02", held for 2m13.3394475s
	I1014 07:24:30.508807   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:32.570516   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:32.570516   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:32.570516   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:35.081338   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:35.081338   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:35.084330   13076 out.go:177] * Found network options:
	I1014 07:24:35.087197   13076 out.go:177]   - NO_PROXY=172.20.108.120
	W1014 07:24:35.089531   13076 proxy.go:119] fail to check proxy env: Error ip not in block
	I1014 07:24:35.091879   13076 out.go:177]   - NO_PROXY=172.20.108.120
	W1014 07:24:35.094409   13076 proxy.go:119] fail to check proxy env: Error ip not in block
	W1014 07:24:35.095622   13076 proxy.go:119] fail to check proxy env: Error ip not in block
	I1014 07:24:35.098960   13076 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1014 07:24:35.099124   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:35.109488   13076 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1014 07:24:35.109488   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:37.293745   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:37.293818   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:37.293870   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:37.344951   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:37.344951   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:37.344951   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:39.933767   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:39.934139   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:39.934445   13076 sshutil.go:53] new ssh client: &{IP:172.20.111.83 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\id_rsa Username:docker}
	I1014 07:24:39.999458   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:39.999458   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:39.999458   13076 sshutil.go:53] new ssh client: &{IP:172.20.111.83 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\id_rsa Username:docker}
	I1014 07:24:40.039802   13076 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.930308s)
	W1014 07:24:40.039802   13076 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 07:24:40.051889   13076 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 07:24:40.057601   13076 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.9585844s)
	W1014 07:24:40.057601   13076 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
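Status 127 here explains the warning printed later in the run: the registry probe invoked curl.exe, the Windows binary name, inside the Linux guest, where only curl exists, so the check fails regardless of actual connectivity. A portable probe would resolve the binary first; a minimal sketch:

    # Use whichever curl spelling exists on the target before probing the registry.
    CURL=$(command -v curl.exe || command -v curl) || { echo 'no curl found' >&2; exit 127; }
    "$CURL" -sS -m 2 https://registry.k8s.io/ >/dev/null && echo 'registry reachable'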
	I1014 07:24:40.090774   13076 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1014 07:24:40.090774   13076 start.go:495] detecting cgroup driver to use...
	I1014 07:24:40.091315   13076 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 07:24:40.139757   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1014 07:24:40.172033   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	W1014 07:24:40.178804   13076 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W1014 07:24:40.178804   13076 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1014 07:24:40.194040   13076 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1014 07:24:40.209169   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1014 07:24:40.241042   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1014 07:24:40.273436   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1014 07:24:40.304842   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1014 07:24:40.336705   13076 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 07:24:40.371635   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1014 07:24:40.404378   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1014 07:24:40.435805   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1014 07:24:40.485319   13076 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 07:24:40.505473   13076 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1014 07:24:40.516975   13076 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1014 07:24:40.550905   13076 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 07:24:40.581632   13076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:24:40.785773   13076 ssh_runner.go:195] Run: sudo systemctl restart containerd
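The sed runs above rewrite /etc/containerd/config.toml for the cgroupfs driver (SystemdCgroup = false), the runc v2 shim, the registry.k8s.io/pause:3.10 sandbox image, and the /etc/cni/net.d conf dir, after which containerd is restarted. The same edits, consolidated into one script that mirrors the logged commands:

    #!/usr/bin/env bash
    # Consolidated containerd config edits from the log (cgroupfs, runc v2, pause:3.10).
    set -euo pipefail
    CFG=/etc/containerd/config.toml
    sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' "$CFG"
    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$CFG"
    sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' "$CFG"
    sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' "$CFG"
    sudo systemctl daemon-reload
    sudo systemctl restart containerd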
	I1014 07:24:40.819978   13076 start.go:495] detecting cgroup driver to use...
	I1014 07:24:40.830992   13076 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1014 07:24:40.876801   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 07:24:40.913930   13076 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 07:24:40.966444   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 07:24:41.006432   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1014 07:24:41.044531   13076 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1014 07:24:41.114617   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1014 07:24:41.139995   13076 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 07:24:41.187779   13076 ssh_runner.go:195] Run: which cri-dockerd
	I1014 07:24:41.207367   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1014 07:24:41.226988   13076 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1014 07:24:41.272924   13076 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1014 07:24:41.470204   13076 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1014 07:24:41.650925   13076 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1014 07:24:41.650925   13076 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1014 07:24:41.694417   13076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:24:41.893295   13076 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1014 07:25:43.008408   13076 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.115034s)
	I1014 07:25:43.020423   13076 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I1014 07:25:43.059257   13076 out.go:201] 
	W1014 07:25:43.062124   13076 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Oct 14 14:24:09 ha-132600-m02 systemd[1]: Starting Docker Application Container Engine...
	Oct 14 14:24:09 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:09.938489944Z" level=info msg="Starting up"
	Oct 14 14:24:09 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:09.941531731Z" level=info msg="containerd not running, starting managed containerd"
	Oct 14 14:24:09 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:09.942638427Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=671
	Oct 14 14:24:09 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:09.976195986Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.003978070Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004049170Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004119469Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004233069Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004318268Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004457268Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004664767Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004761167Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004782067Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004795467Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004887666Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.005331964Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.008453752Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.008545052Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.008673551Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.008759151Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.008859951Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.008988550Z" level=info msg="metadata content store policy set" policy=shared
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.034095951Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.034335550Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.034407650Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.034432250Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.034449450Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.034603849Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.034945948Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035129847Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035263947Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035294447Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035312846Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035328546Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035343646Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035419846Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035443346Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035495946Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035526946Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035542746Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035565345Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035583545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035598745Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035630745Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035648245Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035663945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035689645Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035709545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035731545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035750545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035764245Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035777845Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035792145Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035809345Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035833944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035849444Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035864444Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036025444Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036057644Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036074443Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036087843Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036099043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036115343Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036131843Z" level=info msg="NRI interface is disabled by configuration."
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036482842Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036737141Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036799541Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036822041Z" level=info msg="containerd successfully booted in 0.062283s"
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.009541514Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.046546477Z" level=info msg="Loading containers: start."
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.205429391Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.426461627Z" level=info msg="Loading containers: done."
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.448023414Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.448125314Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.448200613Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.448511212Z" level=info msg="Daemon has completed initialization"
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.556153647Z" level=info msg="API listen on /var/run/docker.sock"
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.556338446Z" level=info msg="API listen on [::]:2376"
	Oct 14 14:24:11 ha-132600-m02 systemd[1]: Started Docker Application Container Engine.
	Oct 14 14:24:41 ha-132600-m02 systemd[1]: Stopping Docker Application Container Engine...
	Oct 14 14:24:41 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:41.916201367Z" level=info msg="Processing signal 'terminated'"
	Oct 14 14:24:41 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:41.918697964Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Oct 14 14:24:41 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:41.919058063Z" level=info msg="Daemon shutdown complete"
	Oct 14 14:24:41 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:41.919215663Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Oct 14 14:24:41 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:41.919252063Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Oct 14 14:24:42 ha-132600-m02 systemd[1]: docker.service: Deactivated successfully.
	Oct 14 14:24:42 ha-132600-m02 systemd[1]: Stopped Docker Application Container Engine.
	Oct 14 14:24:42 ha-132600-m02 systemd[1]: Starting Docker Application Container Engine...
	Oct 14 14:24:42 ha-132600-m02 dockerd[1075]: time="2024-10-14T14:24:42.978494717Z" level=info msg="Starting up"
	Oct 14 14:25:43 ha-132600-m02 dockerd[1075]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Oct 14 14:25:43 ha-132600-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Oct 14 14:25:43 ha-132600-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Oct 14 14:25:43 ha-132600-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
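The journal narrows the failure down: the first dockerd (pid 665) started its own managed containerd and ran fine, but after the restart the new dockerd (pid 1075) dials the system socket /run/containerd/containerd.sock and times out after 60 s ("context deadline exceeded") — plausibly a stale socket left from the containerd unit that was started and then stopped earlier in this log. Reasonable first diagnostics on such a box, sketched:

    # Is the containerd that dockerd dials actually alive and serving its socket?
    systemctl is-active containerd
    ls -l /run/containerd/containerd.sock
    journalctl -u containerd --no-pager -n 50
    # Retry docker; on failure read its own journal, as the error text suggests.
    sudo systemctl restart docker || journalctl -xeu docker.service --no-pager | tail -n 50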
	W1014 07:25:43.062124   13076 out.go:270] * 
	W1014 07:25:43.062827   13076 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 07:25:43.067027   13076 out.go:201] 
	
	
	==> Docker <==
	Oct 14 14:22:32 ha-132600 dockerd[1431]: time="2024-10-14T14:22:32.333281844Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:22:32 ha-132600 dockerd[1431]: time="2024-10-14T14:22:32.333727142Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:22:32 ha-132600 dockerd[1431]: time="2024-10-14T14:22:32.437798771Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 14 14:22:32 ha-132600 dockerd[1431]: time="2024-10-14T14:22:32.437992471Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 14 14:22:32 ha-132600 dockerd[1431]: time="2024-10-14T14:22:32.438006871Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:22:32 ha-132600 dockerd[1431]: time="2024-10-14T14:22:32.438388570Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:22:32 ha-132600 dockerd[1431]: time="2024-10-14T14:22:32.508645787Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 14 14:22:32 ha-132600 dockerd[1431]: time="2024-10-14T14:22:32.508712286Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 14 14:22:32 ha-132600 dockerd[1431]: time="2024-10-14T14:22:32.508732386Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:22:32 ha-132600 dockerd[1431]: time="2024-10-14T14:22:32.508834586Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:22:32 ha-132600 cri-dockerd[1324]: time="2024-10-14T14:22:32Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d7137fb8ec569089599a03e11d18eda509d210d29fa54b5e6a0a8d7dd7a54e7f/resolv.conf as [nameserver 172.20.96.1]"
	Oct 14 14:22:32 ha-132600 cri-dockerd[1324]: time="2024-10-14T14:22:32Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/90ffeb0ce1aafcb639832f7144bd2dbb0b5c9deb53555b49e386b3d3e96d8bc3/resolv.conf as [nameserver 172.20.96.1]"
	Oct 14 14:22:32 ha-132600 cri-dockerd[1324]: time="2024-10-14T14:22:32Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7278ecf7cc9466e1fee2170d699055f6498d2ece8da31c4b3696ed85b3cd38cf/resolv.conf as [nameserver 172.20.96.1]"
	Oct 14 14:22:32 ha-132600 dockerd[1431]: time="2024-10-14T14:22:32.922383735Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 14 14:22:32 ha-132600 dockerd[1431]: time="2024-10-14T14:22:32.922463635Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 14 14:22:32 ha-132600 dockerd[1431]: time="2024-10-14T14:22:32.922481235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:22:32 ha-132600 dockerd[1431]: time="2024-10-14T14:22:32.922580735Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:22:32 ha-132600 dockerd[1431]: time="2024-10-14T14:22:32.996609949Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 14 14:22:32 ha-132600 dockerd[1431]: time="2024-10-14T14:22:32.996807948Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 14 14:22:32 ha-132600 dockerd[1431]: time="2024-10-14T14:22:32.996907948Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:22:32 ha-132600 dockerd[1431]: time="2024-10-14T14:22:32.998275745Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:22:33 ha-132600 dockerd[1431]: time="2024-10-14T14:22:33.069443031Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 14 14:22:33 ha-132600 dockerd[1431]: time="2024-10-14T14:22:33.070022429Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 14 14:22:33 ha-132600 dockerd[1431]: time="2024-10-14T14:22:33.070369527Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:22:33 ha-132600 dockerd[1431]: time="2024-10-14T14:22:33.076841098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5a6196684fc6e       c69fa2e9cbf5f                                                                                       3 minutes ago       Running             coredns                   0                   7278ecf7cc946       coredns-7c65d6cfc9-4qfrq
	81d6fdac8115f       6e38f40d628db                                                                                       3 minutes ago       Running             storage-provisioner       0                   90ffeb0ce1aaf       storage-provisioner
	52c3e5370a6c8       c69fa2e9cbf5f                                                                                       3 minutes ago       Running             coredns                   0                   d7137fb8ec569       coredns-7c65d6cfc9-zf6cd
	dae2c0aa67af3       kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387            3 minutes ago       Running             kindnet-cni               0                   277e64b59b8aa       kindnet-rkjqr
	4745a4b0dc379       60c005f310ff3                                                                                       3 minutes ago       Running             kube-proxy                0                   15f605d31df67       kube-proxy-zkbj8
	b08f7a3c2a5a6       ghcr.io/kube-vip/kube-vip@sha256:9e23baad11ae3e69d739430b9fdb60df22356b7da4b4f4e458fae0541619deb4   4 minutes ago       Running             kube-vip                  0                   46cfd7b278d40       kube-vip-ha-132600
	84fd76d493d80       2e96e5913fc06                                                                                       4 minutes ago       Running             etcd                      0                   d71566d00e2f9       etcd-ha-132600
	b661cb6713103       6bab7719df100                                                                                       4 minutes ago       Running             kube-apiserver            0                   cb18be6165830       kube-apiserver-ha-132600
	35c870864a800       9aa1fad941575                                                                                       4 minutes ago       Running             kube-scheduler            0                   36593363b631a       kube-scheduler-ha-132600
	4a8cce31aa3a2       175ffd71cce3d                                                                                       4 minutes ago       Running             kube-controller-manager   0                   f5fa69fe68b07       kube-controller-manager-ha-132600
	
	
	==> coredns [52c3e5370a6c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6020d6b23d8ab86e45d1d2aab12a43bd19ffd1800c3e6fd0a66779be525e59cd5618fbe141b55ee3a43e920652bc72e7efafa696e55e63b2f614e5fc80dabe10
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:41885 - 43638 "HINFO IN 5147527986633681541.7511835962423692488. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.048565283s
	
	
	==> coredns [5a6196684fc6] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6020d6b23d8ab86e45d1d2aab12a43bd19ffd1800c3e6fd0a66779be525e59cd5618fbe141b55ee3a43e920652bc72e7efafa696e55e63b2f614e5fc80dabe10
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:50464 - 22593 "HINFO IN 8090798371432773748.7872157757733337044. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.084180423s
	
	
	==> describe nodes <==
	Name:               ha-132600
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-132600
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b
	                    minikube.k8s.io/name=ha-132600
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_14T07_22_04_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Oct 2024 14:22:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-132600
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Oct 2024 14:25:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 14 Oct 2024 14:22:34 +0000   Mon, 14 Oct 2024 14:22:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 14 Oct 2024 14:22:34 +0000   Mon, 14 Oct 2024 14:22:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 14 Oct 2024 14:22:34 +0000   Mon, 14 Oct 2024 14:22:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 14 Oct 2024 14:22:34 +0000   Mon, 14 Oct 2024 14:22:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.20.108.120
	  Hostname:    ha-132600
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 07bcc58a0c7e4f0eaae96253dcf0a4ae
	  System UUID:                e06d00fc-5ebc-1a49-bf31-4c6ce500fe9c
	  Boot ID:                    215d765d-07e3-4bb7-ba3c-58ab142d5afa
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-4qfrq             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     3m56s
	  kube-system                 coredns-7c65d6cfc9-zf6cd             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     3m56s
	  kube-system                 etcd-ha-132600                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m2s
	  kube-system                 kindnet-rkjqr                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m56s
	  kube-system                 kube-apiserver-ha-132600             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m
	  kube-system                 kube-controller-manager-ha-132600    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m1s
	  kube-system                 kube-proxy-zkbj8                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 kube-scheduler-ha-132600             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m
	  kube-system                 kube-vip-ha-132600                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 3m54s  kube-proxy       
	  Normal  Starting                 4m     kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m     kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m     kubelet          Node ha-132600 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m     kubelet          Node ha-132600 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m     kubelet          Node ha-132600 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m57s  node-controller  Node ha-132600 event: Registered Node ha-132600 in Controller
	  Normal  NodeReady                3m32s  kubelet          Node ha-132600 status is now: NodeReady
	
	
	==> dmesg <==
	[  +1.245902] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	[  +1.245700] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +7.050916] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +45.956887] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.179434] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[Oct14 14:21] systemd-fstab-generator[997]: Ignoring "noauto" option for root device
	[  +0.095957] kauditd_printk_skb: 65 callbacks suppressed
	[  +0.527367] systemd-fstab-generator[1037]: Ignoring "noauto" option for root device
	[  +0.185829] systemd-fstab-generator[1049]: Ignoring "noauto" option for root device
	[  +0.220100] systemd-fstab-generator[1063]: Ignoring "noauto" option for root device
	[  +2.802946] systemd-fstab-generator[1277]: Ignoring "noauto" option for root device
	[  +0.209648] systemd-fstab-generator[1289]: Ignoring "noauto" option for root device
	[  +0.207153] systemd-fstab-generator[1301]: Ignoring "noauto" option for root device
	[  +0.288223] systemd-fstab-generator[1316]: Ignoring "noauto" option for root device
	[ +11.411364] systemd-fstab-generator[1416]: Ignoring "noauto" option for root device
	[  +0.107418] kauditd_printk_skb: 202 callbacks suppressed
	[  +3.752049] systemd-fstab-generator[1674]: Ignoring "noauto" option for root device
	[  +6.264384] systemd-fstab-generator[1823]: Ignoring "noauto" option for root device
	[  +0.102334] kauditd_printk_skb: 70 callbacks suppressed
	[  +5.637673] kauditd_printk_skb: 67 callbacks suppressed
	[Oct14 14:22] systemd-fstab-generator[2317]: Ignoring "noauto" option for root device
	[  +5.137329] kauditd_printk_skb: 17 callbacks suppressed
	[  +8.104433] kauditd_printk_skb: 29 callbacks suppressed
	
	
	==> etcd [84fd76d493d8] <==
	{"level":"info","ts":"2024-10-14T14:21:55.734345Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1c0f11edc616a87e received MsgVoteResp from 1c0f11edc616a87e at term 2"}
	{"level":"info","ts":"2024-10-14T14:21:55.734468Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1c0f11edc616a87e became leader at term 2"}
	{"level":"info","ts":"2024-10-14T14:21:55.734558Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 1c0f11edc616a87e elected leader 1c0f11edc616a87e at term 2"}
	{"level":"info","ts":"2024-10-14T14:21:55.743944Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-14T14:21:55.753571Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"60f23df2979e3d4a","local-member-id":"1c0f11edc616a87e","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-14T14:21:55.754180Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-14T14:21:55.755114Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-14T14:21:55.755455Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"1c0f11edc616a87e","local-member-attributes":"{Name:ha-132600 ClientURLs:[https://172.20.108.120:2379]}","request-path":"/0/members/1c0f11edc616a87e/attributes","cluster-id":"60f23df2979e3d4a","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-14T14:21:55.755734Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-14T14:21:55.762346Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-14T14:21:55.764750Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-14T14:21:55.773938Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-14T14:21:55.771033Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-14T14:21:55.781509Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-14T14:21:55.790642Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.20.108.120:2379"}
	{"level":"info","ts":"2024-10-14T14:21:55.783509Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-14T14:22:07.879957Z","caller":"traceutil/trace.go:171","msg":"trace[597013235] transaction","detail":"{read_only:false; response_revision:340; number_of_response:1; }","duration":"128.695256ms","start":"2024-10-14T14:22:07.751244Z","end":"2024-10-14T14:22:07.879940Z","steps":["trace[597013235] 'process raft request'  (duration: 56.790725ms)","trace[597013235] 'compare'  (duration: 71.246831ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-14T14:22:07.880058Z","caller":"traceutil/trace.go:171","msg":"trace[2064377417] transaction","detail":"{read_only:false; response_revision:343; number_of_response:1; }","duration":"126.232155ms","start":"2024-10-14T14:22:07.753813Z","end":"2024-10-14T14:22:07.880045Z","steps":["trace[2064377417] 'process raft request'  (duration: 125.814455ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-14T14:22:07.880130Z","caller":"traceutil/trace.go:171","msg":"trace[152598700] transaction","detail":"{read_only:false; response_revision:341; number_of_response:1; }","duration":"128.322756ms","start":"2024-10-14T14:22:07.751800Z","end":"2024-10-14T14:22:07.880123Z","steps":["trace[152598700] 'process raft request'  (duration: 127.754056ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-14T14:22:07.880149Z","caller":"traceutil/trace.go:171","msg":"trace[1241907766] transaction","detail":"{read_only:false; response_revision:342; number_of_response:1; }","duration":"128.197556ms","start":"2024-10-14T14:22:07.751946Z","end":"2024-10-14T14:22:07.880144Z","steps":["trace[1241907766] 'process raft request'  (duration: 127.649056ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-14T14:22:13.374435Z","caller":"traceutil/trace.go:171","msg":"trace[979670185] transaction","detail":"{read_only:false; response_revision:369; number_of_response:1; }","duration":"443.350864ms","start":"2024-10-14T14:22:12.931064Z","end":"2024-10-14T14:22:13.374415Z","steps":["trace[979670185] 'process raft request'  (duration: 443.018064ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-14T14:22:13.376146Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-14T14:22:12.931047Z","time spent":"444.536163ms","remote":"127.0.0.1:55470","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5580,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/etcd-ha-132600\" mod_revision:367 > success:<request_put:<key:\"/registry/pods/kube-system/etcd-ha-132600\" value_size:5531 >> failure:<request_range:<key:\"/registry/pods/kube-system/etcd-ha-132600\" > >"}
	{"level":"info","ts":"2024-10-14T14:22:28.783322Z","caller":"traceutil/trace.go:171","msg":"trace[1840750524] transaction","detail":"{read_only:false; response_revision:403; number_of_response:1; }","duration":"157.257125ms","start":"2024-10-14T14:22:28.626047Z","end":"2024-10-14T14:22:28.783304Z","steps":["trace[1840750524] 'process raft request'  (duration: 157.088326ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-14T14:22:29.878049Z","caller":"traceutil/trace.go:171","msg":"trace[630262405] transaction","detail":"{read_only:false; response_revision:404; number_of_response:1; }","duration":"108.586835ms","start":"2024-10-14T14:22:29.769442Z","end":"2024-10-14T14:22:29.878029Z","steps":["trace[630262405] 'process raft request'  (duration: 107.997536ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-14T14:23:24.077614Z","caller":"traceutil/trace.go:171","msg":"trace[844030949] transaction","detail":"{read_only:false; response_revision:541; number_of_response:1; }","duration":"108.091259ms","start":"2024-10-14T14:23:23.969504Z","end":"2024-10-14T14:23:24.077595Z","steps":["trace[844030949] 'process raft request'  (duration: 107.87726ms)"],"step_count":1}
	
	
	==> kernel <==
	 14:26:03 up 6 min,  0 users,  load average: 0.19, 0.46, 0.25
	Linux ha-132600 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [dae2c0aa67af] <==
	I1014 14:23:57.571698       1 main.go:300] handling current node
	I1014 14:24:07.571984       1 main.go:296] Handling node with IPs: map[172.20.108.120:{}]
	I1014 14:24:07.572036       1 main.go:300] handling current node
	I1014 14:24:17.563336       1 main.go:296] Handling node with IPs: map[172.20.108.120:{}]
	I1014 14:24:17.563377       1 main.go:300] handling current node
	I1014 14:24:27.572231       1 main.go:296] Handling node with IPs: map[172.20.108.120:{}]
	I1014 14:24:27.572292       1 main.go:300] handling current node
	I1014 14:24:37.571608       1 main.go:296] Handling node with IPs: map[172.20.108.120:{}]
	I1014 14:24:37.571762       1 main.go:300] handling current node
	I1014 14:24:47.563831       1 main.go:296] Handling node with IPs: map[172.20.108.120:{}]
	I1014 14:24:47.563909       1 main.go:300] handling current node
	I1014 14:24:57.563556       1 main.go:296] Handling node with IPs: map[172.20.108.120:{}]
	I1014 14:24:57.563623       1 main.go:300] handling current node
	I1014 14:25:07.563086       1 main.go:296] Handling node with IPs: map[172.20.108.120:{}]
	I1014 14:25:07.563216       1 main.go:300] handling current node
	I1014 14:25:17.563379       1 main.go:296] Handling node with IPs: map[172.20.108.120:{}]
	I1014 14:25:17.563427       1 main.go:300] handling current node
	I1014 14:25:27.570416       1 main.go:296] Handling node with IPs: map[172.20.108.120:{}]
	I1014 14:25:27.570540       1 main.go:300] handling current node
	I1014 14:25:37.564996       1 main.go:296] Handling node with IPs: map[172.20.108.120:{}]
	I1014 14:25:37.565044       1 main.go:300] handling current node
	I1014 14:25:47.563444       1 main.go:296] Handling node with IPs: map[172.20.108.120:{}]
	I1014 14:25:47.563514       1 main.go:300] handling current node
	I1014 14:25:57.564393       1 main.go:296] Handling node with IPs: map[172.20.108.120:{}]
	I1014 14:25:57.564604       1 main.go:300] handling current node
	
	
	==> kube-apiserver [b661cb671310] <==
	I1014 14:21:58.916981       1 cache.go:39] Caches are synced for autoregister controller
	I1014 14:21:58.913031       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1014 14:21:58.913049       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1014 14:21:58.913061       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1014 14:21:58.919034       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I1014 14:21:58.925388       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1014 14:21:58.925426       1 policy_source.go:224] refreshing policies
	I1014 14:21:58.928296       1 controller.go:615] quota admission added evaluator for: namespaces
	E1014 14:21:58.950810       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1014 14:21:59.174958       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1014 14:21:59.819127       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1014 14:21:59.829797       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1014 14:21:59.829835       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1014 14:22:00.942976       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1014 14:22:01.015766       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1014 14:22:01.150423       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1014 14:22:01.164901       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.20.108.120]
	I1014 14:22:01.165818       1 controller.go:615] quota admission added evaluator for: endpoints
	I1014 14:22:01.175247       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1014 14:22:01.835583       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1014 14:22:03.554010       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1014 14:22:03.589407       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1014 14:22:03.617368       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1014 14:22:07.346752       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1014 14:22:07.516653       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [4a8cce31aa3a] <==
	I1014 14:22:06.797898       1 shared_informer.go:320] Caches are synced for resource quota
	I1014 14:22:06.831102       1 shared_informer.go:320] Caches are synced for taint
	I1014 14:22:06.831728       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1014 14:22:06.832169       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-132600"
	I1014 14:22:06.832235       1 node_lifecycle_controller.go:1036] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1014 14:22:07.228194       1 shared_informer.go:320] Caches are synced for garbage collector
	I1014 14:22:07.228219       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I1014 14:22:07.238830       1 shared_informer.go:320] Caches are synced for garbage collector
	I1014 14:22:07.890284       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="531.70123ms"
	I1014 14:22:07.963572       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="72.664531ms"
	I1014 14:22:07.964409       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="234.3µs"
	I1014 14:22:31.675332       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600"
	I1014 14:22:31.702335       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600"
	I1014 14:22:31.722232       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="290.099µs"
	I1014 14:22:31.739820       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="1.478096ms"
	I1014 14:22:31.765093       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="242.4µs"
	I1014 14:22:31.815158       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="67µs"
	I1014 14:22:31.836955       1 node_lifecycle_controller.go:1055] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I1014 14:22:33.199416       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="65.9µs"
	I1014 14:22:34.284134       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="108.4µs"
	I1014 14:22:34.352001       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="27.147579ms"
	I1014 14:22:34.400543       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="48.068986ms"
	I1014 14:22:34.400805       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="105.799µs"
	I1014 14:22:34.401110       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="152.899µs"
	I1014 14:22:34.558208       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600"
	
	
	==> kube-proxy [4745a4b0dc37] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1014 14:22:09.024513       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1014 14:22:09.047174       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["172.20.108.120"]
	E1014 14:22:09.047308       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1014 14:22:09.124346       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1014 14:22:09.124494       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1014 14:22:09.124529       1 server_linux.go:169] "Using iptables Proxier"
	I1014 14:22:09.128809       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1014 14:22:09.129721       1 server.go:483] "Version info" version="v1.31.1"
	I1014 14:22:09.129805       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 14:22:09.131652       1 config.go:199] "Starting service config controller"
	I1014 14:22:09.131988       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1014 14:22:09.132269       1 config.go:105] "Starting endpoint slice config controller"
	I1014 14:22:09.132619       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1014 14:22:09.133592       1 config.go:328] "Starting node config controller"
	I1014 14:22:09.135184       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1014 14:22:09.232621       1 shared_informer.go:320] Caches are synced for service config
	I1014 14:22:09.233933       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1014 14:22:09.236054       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [35c870864a80] <==
	W1014 14:21:59.944424       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1014 14:21:59.944452       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 14:21:59.979078       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1014 14:21:59.979365       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 14:22:00.111250       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1014 14:22:00.111664       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 14:22:00.192015       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1014 14:22:00.192346       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1014 14:22:00.196984       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1014 14:22:00.197112       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 14:22:00.197208       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1014 14:22:00.197241       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1014 14:22:00.212172       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1014 14:22:00.212214       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1014 14:22:00.260144       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1014 14:22:00.261002       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1014 14:22:00.276235       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1014 14:22:00.276419       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1014 14:22:00.295031       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1014 14:22:00.295892       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 14:22:00.311664       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1014 14:22:00.313212       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1014 14:22:00.354351       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1014 14:22:00.355204       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 14:22:02.117335       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 14 14:22:31 ha-132600 kubelet[2324]: I1014 14:22:31.717029    2324 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-rkjqr" podStartSLOduration=17.785417165 podStartE2EDuration="24.717002635s" podCreationTimestamp="2024-10-14 14:22:07 +0000 UTC" firstStartedPulling="2024-10-14 14:22:08.823820226 +0000 UTC m=+5.394539568" lastFinishedPulling="2024-10-14 14:22:15.755405696 +0000 UTC m=+12.326125038" observedRunningTime="2024-10-14 14:22:16.993102098 +0000 UTC m=+13.563821540" watchObservedRunningTime="2024-10-14 14:22:31.717002635 +0000 UTC m=+28.287721977"
	Oct 14 14:22:31 ha-132600 kubelet[2324]: I1014 14:22:31.771406    2324 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0fbbeb2c-d015-4b70-9a7b-57f155b48f1c-config-volume\") pod \"coredns-7c65d6cfc9-4qfrq\" (UID: \"0fbbeb2c-d015-4b70-9a7b-57f155b48f1c\") " pod="kube-system/coredns-7c65d6cfc9-4qfrq"
	Oct 14 14:22:31 ha-132600 kubelet[2324]: I1014 14:22:31.771471    2324 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/97022a78-ed1d-4bdb-a9c2-53a0c3fb2392-config-volume\") pod \"coredns-7c65d6cfc9-zf6cd\" (UID: \"97022a78-ed1d-4bdb-a9c2-53a0c3fb2392\") " pod="kube-system/coredns-7c65d6cfc9-zf6cd"
	Oct 14 14:22:31 ha-132600 kubelet[2324]: I1014 14:22:31.771501    2324 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwn2z\" (UniqueName: \"kubernetes.io/projected/0fbbeb2c-d015-4b70-9a7b-57f155b48f1c-kube-api-access-pwn2z\") pod \"coredns-7c65d6cfc9-4qfrq\" (UID: \"0fbbeb2c-d015-4b70-9a7b-57f155b48f1c\") " pod="kube-system/coredns-7c65d6cfc9-4qfrq"
	Oct 14 14:22:31 ha-132600 kubelet[2324]: I1014 14:22:31.771567    2324 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbqgt\" (UniqueName: \"kubernetes.io/projected/97022a78-ed1d-4bdb-a9c2-53a0c3fb2392-kube-api-access-hbqgt\") pod \"coredns-7c65d6cfc9-zf6cd\" (UID: \"97022a78-ed1d-4bdb-a9c2-53a0c3fb2392\") " pod="kube-system/coredns-7c65d6cfc9-zf6cd"
	Oct 14 14:22:31 ha-132600 kubelet[2324]: I1014 14:22:31.872245    2324 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/cc604a80-e2aa-4b2b-9608-2f03a7bf96a8-tmp\") pod \"storage-provisioner\" (UID: \"cc604a80-e2aa-4b2b-9608-2f03a7bf96a8\") " pod="kube-system/storage-provisioner"
	Oct 14 14:22:31 ha-132600 kubelet[2324]: I1014 14:22:31.872354    2324 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4qsh\" (UniqueName: \"kubernetes.io/projected/cc604a80-e2aa-4b2b-9608-2f03a7bf96a8-kube-api-access-r4qsh\") pod \"storage-provisioner\" (UID: \"cc604a80-e2aa-4b2b-9608-2f03a7bf96a8\") " pod="kube-system/storage-provisioner"
	Oct 14 14:22:33 ha-132600 kubelet[2324]: I1014 14:22:33.200776    2324 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-zf6cd" podStartSLOduration=26.200756043 podStartE2EDuration="26.200756043s" podCreationTimestamp="2024-10-14 14:22:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-14 14:22:33.200380745 +0000 UTC m=+29.771100087" watchObservedRunningTime="2024-10-14 14:22:33.200756043 +0000 UTC m=+29.771475385"
	Oct 14 14:22:34 ha-132600 kubelet[2324]: I1014 14:22:34.281076    2324 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-4qfrq" podStartSLOduration=27.281057108 podStartE2EDuration="27.281057108s" podCreationTimestamp="2024-10-14 14:22:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-14 14:22:34.279678214 +0000 UTC m=+30.850397556" watchObservedRunningTime="2024-10-14 14:22:34.281057108 +0000 UTC m=+30.851776550"
	Oct 14 14:22:34 ha-132600 kubelet[2324]: I1014 14:22:34.281222    2324 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=19.281212907 podStartE2EDuration="19.281212907s" podCreationTimestamp="2024-10-14 14:22:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-14 14:22:33.264570557 +0000 UTC m=+29.835289999" watchObservedRunningTime="2024-10-14 14:22:34.281212907 +0000 UTC m=+30.851932249"
	Oct 14 14:23:03 ha-132600 kubelet[2324]: E1014 14:23:03.676810    2324 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 14 14:23:03 ha-132600 kubelet[2324]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 14 14:23:03 ha-132600 kubelet[2324]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 14 14:23:03 ha-132600 kubelet[2324]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 14 14:23:03 ha-132600 kubelet[2324]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 14 14:24:03 ha-132600 kubelet[2324]: E1014 14:24:03.676383    2324 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 14 14:24:03 ha-132600 kubelet[2324]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 14 14:24:03 ha-132600 kubelet[2324]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 14 14:24:03 ha-132600 kubelet[2324]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 14 14:24:03 ha-132600 kubelet[2324]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 14 14:25:03 ha-132600 kubelet[2324]: E1014 14:25:03.675008    2324 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 14 14:25:03 ha-132600 kubelet[2324]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 14 14:25:03 ha-132600 kubelet[2324]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 14 14:25:03 ha-132600 kubelet[2324]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 14 14:25:03 ha-132600 kubelet[2324]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [81d6fdac8115] <==
	I1014 14:22:33.432711       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1014 14:22:33.504323       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1014 14:22:33.505471       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1014 14:22:33.522254       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1014 14:22:33.522619       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ha-132600_6ba40601-1895-44e0-aab6-02c0536a0b9c!
	I1014 14:22:33.527769       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5c7ed5ed-4913-4d2f-8634-767d8aa0727d", APIVersion:"v1", ResourceVersion:"436", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ha-132600_6ba40601-1895-44e0-aab6-02c0536a0b9c became leader
	I1014 14:22:33.636551       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ha-132600_6ba40601-1895-44e0-aab6-02c0536a0b9c!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-132600 -n ha-132600
E1014 07:26:05.090365     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\functional-572000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-132600 -n ha-132600: (11.9382943s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-132600 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StartCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StartCluster (436.42s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (742.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-132600 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-132600 -- rollout status deployment/busybox
E1014 07:29:10.844374     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\addons-043000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1014 07:30:37.375872     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\functional-572000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1014 07:32:13.923056     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\addons-043000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1014 07:34:10.844846     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\addons-043000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1014 07:35:37.375640     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\functional-572000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:133: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-132600 -- rollout status deployment/busybox: exit status 1 (10m3.9703982s)

                                                
                                                
-- stdout --
	Waiting for deployment "busybox" rollout to finish: 0 of 3 updated replicas are available...
	Waiting for deployment "busybox" rollout to finish: 1 of 3 updated replicas are available...

                                                
                                                
-- /stdout --
** stderr ** 
	error: deployment "busybox" exceeded its progress deadline

                                                
                                                
** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-132600 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
I1014 07:36:21.444222     936 retry.go:31] will retry after 1.395563955s: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-132600 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
I1014 07:36:23.179275     936 retry.go:31] will retry after 1.839499864s: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-132600 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
I1014 07:36:25.370800     936 retry.go:31] will retry after 1.490131504s: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-132600 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
I1014 07:36:27.223247     936 retry.go:31] will retry after 3.839458424s: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-132600 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
I1014 07:36:31.401817     936 retry.go:31] will retry after 3.004367428s: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-132600 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
I1014 07:36:34.757708     936 retry.go:31] will retry after 6.512911121s: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-132600 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
I1014 07:36:41.610248     936 retry.go:31] will retry after 12.26347137s: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-132600 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
I1014 07:36:54.233913     936 retry.go:31] will retry after 11.815774481s: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
E1014 07:37:00.454317     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\functional-572000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-132600 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
I1014 07:37:06.401126     936 retry.go:31] will retry after 33.303940165s: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-132600 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
I1014 07:37:40.045633     936 retry.go:31] will retry after 20.332638455s: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-132600 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
ha_test.go:159: failed to resolve pod IPs: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
ha_test.go:163: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-132600 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-132600 -- exec busybox-7dff88458-8thz6 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-132600 -- exec busybox-7dff88458-8thz6 -- nslookup kubernetes.io: exit status 1 (357.2095ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): pod busybox-7dff88458-8thz6 does not have a host assigned

                                                
                                                
** /stderr **
ha_test.go:173: Pod busybox-7dff88458-8thz6 could not resolve 'kubernetes.io': exit status 1
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-132600 -- exec busybox-7dff88458-kr92j -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-132600 -- exec busybox-7dff88458-kr92j -- nslookup kubernetes.io: (1.8367704s)
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-132600 -- exec busybox-7dff88458-rng7p -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-132600 -- exec busybox-7dff88458-rng7p -- nslookup kubernetes.io: exit status 1 (361.6812ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): pod busybox-7dff88458-rng7p does not have a host assigned

                                                
                                                
** /stderr **
ha_test.go:173: Pod busybox-7dff88458-rng7p could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-132600 -- exec busybox-7dff88458-8thz6 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-132600 -- exec busybox-7dff88458-8thz6 -- nslookup kubernetes.default: exit status 1 (333.9916ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): pod busybox-7dff88458-8thz6 does not have a host assigned

                                                
                                                
** /stderr **
ha_test.go:183: Pod busybox-7dff88458-8thz6 could not resolve 'kubernetes.default': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-132600 -- exec busybox-7dff88458-kr92j -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-132600 -- exec busybox-7dff88458-rng7p -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-132600 -- exec busybox-7dff88458-rng7p -- nslookup kubernetes.default: exit status 1 (358.0676ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): pod busybox-7dff88458-rng7p does not have a host assigned

                                                
                                                
** /stderr **
ha_test.go:183: Pod busybox-7dff88458-rng7p could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-132600 -- exec busybox-7dff88458-8thz6 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-132600 -- exec busybox-7dff88458-8thz6 -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (332.0304ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): pod busybox-7dff88458-8thz6 does not have a host assigned

                                                
                                                
** /stderr **
ha_test.go:191: Pod busybox-7dff88458-8thz6 could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-132600 -- exec busybox-7dff88458-kr92j -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-132600 -- exec busybox-7dff88458-rng7p -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-132600 -- exec busybox-7dff88458-rng7p -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (353.0322ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): pod busybox-7dff88458-rng7p does not have a host assigned

                                                
                                                
** /stderr **
ha_test.go:191: Pod busybox-7dff88458-rng7p could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-132600 -n ha-132600
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-132600 -n ha-132600: (12.1634593s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DeployApp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-132600 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-132600 logs -n 25: (8.2182013s)
helpers_test.go:252: TestMultiControlPlane/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                 Args                 |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| delete  | -p functional-572000                 | functional-572000 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:17 PDT | 14 Oct 24 07:19 PDT |
	| start   | -p ha-132600 --wait=true             | ha-132600         | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:19 PDT |                     |
	|         | --memory=2200 --ha                   |                   |                   |         |                     |                     |
	|         | -v=7 --alsologtostderr               |                   |                   |         |                     |                     |
	|         | --driver=hyperv                      |                   |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- apply -f             | ha-132600         | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:26 PDT | 14 Oct 24 07:26 PDT |
	|         | ./testdata/ha/ha-pod-dns-test.yaml   |                   |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- rollout status       | ha-132600         | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:26 PDT |                     |
	|         | deployment/busybox                   |                   |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- get pods -o          | ha-132600         | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:36 PDT | 14 Oct 24 07:36 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- get pods -o          | ha-132600         | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:36 PDT | 14 Oct 24 07:36 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- get pods -o          | ha-132600         | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:36 PDT | 14 Oct 24 07:36 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- get pods -o          | ha-132600         | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:36 PDT | 14 Oct 24 07:36 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- get pods -o          | ha-132600         | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:36 PDT | 14 Oct 24 07:36 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- get pods -o          | ha-132600         | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:36 PDT | 14 Oct 24 07:36 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- get pods -o          | ha-132600         | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:36 PDT | 14 Oct 24 07:36 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- get pods -o          | ha-132600         | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:36 PDT | 14 Oct 24 07:36 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- get pods -o          | ha-132600         | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:37 PDT | 14 Oct 24 07:37 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- get pods -o          | ha-132600         | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:37 PDT | 14 Oct 24 07:37 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- get pods -o          | ha-132600         | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT | 14 Oct 24 07:38 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- get pods -o          | ha-132600         | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT | 14 Oct 24 07:38 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |                   |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600         | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT |                     |
	|         | busybox-7dff88458-8thz6 --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600         | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT | 14 Oct 24 07:38 PDT |
	|         | busybox-7dff88458-kr92j --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600         | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT |                     |
	|         | busybox-7dff88458-rng7p --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600         | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT |                     |
	|         | busybox-7dff88458-8thz6 --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600         | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT | 14 Oct 24 07:38 PDT |
	|         | busybox-7dff88458-kr92j --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600         | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT |                     |
	|         | busybox-7dff88458-rng7p --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600         | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT |                     |
	|         | busybox-7dff88458-8thz6 -- nslookup  |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600         | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT | 14 Oct 24 07:38 PDT |
	|         | busybox-7dff88458-kr92j -- nslookup  |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600         | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT |                     |
	|         | busybox-7dff88458-rng7p -- nslookup  |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/14 07:19:00
	Running on machine: minikube1
	Binary: Built with gc go1.23.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 07:19:00.342865   13076 out.go:345] Setting OutFile to fd 1228 ...
	I1014 07:19:00.344546   13076 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:19:00.344546   13076 out.go:358] Setting ErrFile to fd 824...
	I1014 07:19:00.344546   13076 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:19:00.368247   13076 out.go:352] Setting JSON to false
	I1014 07:19:00.372277   13076 start.go:129] hostinfo: {"hostname":"minikube1","uptime":101054,"bootTime":1728814485,"procs":202,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5011 Build 19045.5011","kernelVersion":"10.0.19045.5011 Build 19045.5011","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W1014 07:19:00.372803   13076 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1014 07:19:00.379728   13076 out.go:177] * [ha-132600] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5011 Build 19045.5011
	I1014 07:19:00.386820   13076 notify.go:220] Checking for updates...
	I1014 07:19:00.389571   13076 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1014 07:19:00.392679   13076 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 07:19:00.395167   13076 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I1014 07:19:00.398285   13076 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 07:19:00.400520   13076 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 07:19:00.404118   13076 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 07:19:05.654471   13076 out.go:177] * Using the hyperv driver based on user configuration
	I1014 07:19:05.658285   13076 start.go:297] selected driver: hyperv
	I1014 07:19:05.658285   13076 start.go:901] validating driver "hyperv" against <nil>
	I1014 07:19:05.658285   13076 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 07:19:05.705211   13076 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1014 07:19:05.706541   13076 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 07:19:05.706541   13076 cni.go:84] Creating CNI manager for ""
	I1014 07:19:05.706541   13076 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1014 07:19:05.706541   13076 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1014 07:19:05.707066   13076 start.go:340] cluster config:
	{Name:ha-132600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-132600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 07:19:05.707207   13076 iso.go:125] acquiring lock: {Name:mkb71635bcbccba29dce9048ad4cc71430a7e577 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 07:19:05.711264   13076 out.go:177] * Starting "ha-132600" primary control-plane node in "ha-132600" cluster
	I1014 07:19:05.715097   13076 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1014 07:19:05.715262   13076 preload.go:146] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I1014 07:19:05.715344   13076 cache.go:56] Caching tarball of preloaded images
	I1014 07:19:05.715452   13076 preload.go:172] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1014 07:19:05.715452   13076 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1014 07:19:05.716553   13076 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\config.json ...
	I1014 07:19:05.716898   13076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\config.json: {Name:mkbd11bf3f90adebf1f4630d2e8deaac328748d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:19:05.717886   13076 start.go:360] acquireMachinesLock for ha-132600: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 07:19:05.717886   13076 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-132600"
	I1014 07:19:05.718562   13076 start.go:93] Provisioning new machine with config: &{Name:ha-132600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-132600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1014 07:19:05.718562   13076 start.go:125] createHost starting for "" (driver="hyperv")
	I1014 07:19:05.721891   13076 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1014 07:19:05.722555   13076 start.go:159] libmachine.API.Create for "ha-132600" (driver="hyperv")
	I1014 07:19:05.722555   13076 client.go:168] LocalClient.Create starting
	I1014 07:19:05.722555   13076 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I1014 07:19:05.723316   13076 main.go:141] libmachine: Decoding PEM data...
	I1014 07:19:05.723316   13076 main.go:141] libmachine: Parsing certificate...
	I1014 07:19:05.723316   13076 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I1014 07:19:05.723877   13076 main.go:141] libmachine: Decoding PEM data...
	I1014 07:19:05.723955   13076 main.go:141] libmachine: Parsing certificate...
	I1014 07:19:05.723955   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I1014 07:19:07.752927   13076 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I1014 07:19:07.753576   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:07.753793   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I1014 07:19:09.445838   13076 main.go:141] libmachine: [stdout =====>] : False
	
	I1014 07:19:09.445838   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:09.446315   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1014 07:19:10.901034   13076 main.go:141] libmachine: [stdout =====>] : True
	
	I1014 07:19:10.901034   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:10.901034   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1014 07:19:14.320078   13076 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1014 07:19:14.320309   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:14.323253   13076 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1014 07:19:14.829443   13076 main.go:141] libmachine: Creating SSH key...
	I1014 07:19:14.988345   13076 main.go:141] libmachine: Creating VM...
	I1014 07:19:14.988345   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1014 07:19:17.790932   13076 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1014 07:19:17.791005   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:17.791145   13076 main.go:141] libmachine: Using switch "Default Switch"
	I1014 07:19:17.791145   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1014 07:19:19.529861   13076 main.go:141] libmachine: [stdout =====>] : True
	
	I1014 07:19:19.529861   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:19.529861   13076 main.go:141] libmachine: Creating VHD
	I1014 07:19:19.530436   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\fixed.vhd' -SizeBytes 10MB -Fixed
	I1014 07:19:23.105904   13076 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 267BC11D-A195-4636-8AE9-EF1D7966C95C
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I1014 07:19:23.105987   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:23.106037   13076 main.go:141] libmachine: Writing magic tar header
	I1014 07:19:23.106037   13076 main.go:141] libmachine: Writing SSH key tar header
	I1014 07:19:23.118040   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\disk.vhd' -VHDType Dynamic -DeleteSource
	I1014 07:19:26.201609   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:26.201950   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:26.202023   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\disk.vhd' -SizeBytes 20000MB
	I1014 07:19:28.768368   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:28.768789   13076 main.go:141] libmachine: [stderr =====>] : 
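	For reference, the disk preparation logged above is a three-step trick: create a tiny fixed-size VHD, seed it with a raw tar stream (the "magic tar header" plus the SSH key, which the guest unpacks on first boot), then convert it to a dynamic VHD and grow it to the requested 20000MB. A sketch of the same cmdlet sequence, assuming a bash shell that can reach powershell.exe the way this CI host does:

	PS() { powershell.exe -NoProfile -NonInteractive "$@"; }
	d='C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600'
	PS "Hyper-V\New-VHD -Path '$d\fixed.vhd' -SizeBytes 10MB -Fixed"
	# libmachine writes the tar header and SSH key straight into fixed.vhd here
	PS "Hyper-V\Convert-VHD -Path '$d\fixed.vhd' -DestinationPath '$d\disk.vhd' -VHDType Dynamic -DeleteSource"
	PS "Hyper-V\Resize-VHD -Path '$d\disk.vhd' -SizeBytes 20000MB"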
	I1014 07:19:28.768969   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-132600 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I1014 07:19:32.226690   13076 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-132600 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I1014 07:19:32.226915   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:32.226915   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-132600 -DynamicMemoryEnabled $false
	I1014 07:19:34.390731   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:34.390837   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:34.391049   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-132600 -Count 2
	I1014 07:19:36.449382   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:36.449628   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:36.449628   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-132600 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\boot2docker.iso'
	I1014 07:19:38.932909   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:38.932909   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:38.932909   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-132600 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\disk.vhd'
	I1014 07:19:41.509229   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:41.509229   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:41.509229   13076 main.go:141] libmachine: Starting VM...
	I1014 07:19:41.509229   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-132600
	I1014 07:19:44.718680   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:44.718680   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:44.719475   13076 main.go:141] libmachine: Waiting for host to start...
	I1014 07:19:44.719770   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:19:46.954823   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:19:46.954823   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:46.955829   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:19:49.408227   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:49.408227   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:50.409369   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:19:52.561607   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:19:52.561912   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:52.561912   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:19:55.081136   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:55.081187   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:56.082401   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:19:58.271497   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:19:58.271497   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:58.272384   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:00.740863   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:20:00.741070   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:01.741536   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:03.902654   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:03.902721   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:03.902721   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:06.378064   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:20:06.378064   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:07.379104   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:09.537329   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:09.537329   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:09.537541   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:12.074320   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:12.074320   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:12.074834   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:14.171613   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:14.171613   13076 main.go:141] libmachine: [stderr =====>] : 
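	The stanza above is libmachine's wait loop: it alternates ( Hyper-V\Get-VM ha-132600 ).state with the first adapter's ipaddresses[0] until DHCP hands the guest an address (172.20.108.120, roughly 27 seconds after Start-VM here). The same loop in shell form, with the powershell.exe bridge again an assumption about the host environment:

	get_ip() {
	  powershell.exe -NoProfile -NonInteractive \
	    "(( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]" | tr -d '\r'
	}
	until ip=$(get_ip) && [ -n "$ip" ]; do
	  sleep 1   # each Get-VM round trip already costs ~2.5s on this host
	done
	echo "VM reachable at $ip"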
	I1014 07:20:14.171613   13076 machine.go:93] provisionDockerMachine start ...
	I1014 07:20:14.172377   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:16.225265   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:16.225265   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:16.225634   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:18.692061   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:18.692061   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:18.698261   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:20:18.713495   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.108.120 22 <nil> <nil>}
	I1014 07:20:18.713568   13076 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 07:20:18.839311   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1014 07:20:18.839517   13076 buildroot.go:166] provisioning hostname "ha-132600"
	I1014 07:20:18.839517   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:20.911254   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:20.911424   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:20.911601   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:23.370138   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:23.370756   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:23.377008   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:20:23.377773   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.108.120 22 <nil> <nil>}
	I1014 07:20:23.377773   13076 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-132600 && echo "ha-132600" | sudo tee /etc/hostname
	I1014 07:20:23.537548   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-132600
	
	I1014 07:20:23.537548   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:25.571429   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:25.571429   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:25.572326   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:28.068677   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:28.068677   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:28.077026   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:20:28.078615   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.108.120 22 <nil> <nil>}
	I1014 07:20:28.078615   13076 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-132600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-132600/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-132600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 07:20:28.216809   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 07:20:28.216874   13076 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I1014 07:20:28.216990   13076 buildroot.go:174] setting up certificates
	I1014 07:20:28.217016   13076 provision.go:84] configureAuth start
	I1014 07:20:28.217047   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:30.246758   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:30.247093   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:30.247368   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:32.688428   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:32.688428   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:32.688428   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:34.748908   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:34.748908   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:34.749349   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:37.221628   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:37.221798   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:37.221798   13076 provision.go:143] copyHostCerts
	I1014 07:20:37.221998   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I1014 07:20:37.222444   13076 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I1014 07:20:37.222558   13076 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I1014 07:20:37.223110   13076 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1014 07:20:37.224109   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I1014 07:20:37.224599   13076 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I1014 07:20:37.224599   13076 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I1014 07:20:37.224959   13076 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I1014 07:20:37.225332   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I1014 07:20:37.225332   13076 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I1014 07:20:37.225332   13076 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I1014 07:20:37.226765   13076 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1014 07:20:37.227733   13076 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-132600 san=[127.0.0.1 172.20.108.120 ha-132600 localhost minikube]
	I1014 07:20:37.731674   13076 provision.go:177] copyRemoteCerts
	I1014 07:20:37.744706   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 07:20:37.744706   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:39.801309   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:39.801309   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:39.801309   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:42.273306   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:42.273367   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:42.273367   13076 sshutil.go:53] new ssh client: &{IP:172.20.108.120 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\id_rsa Username:docker}
	I1014 07:20:42.378794   13076 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.6340227s)
	I1014 07:20:42.378847   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1014 07:20:42.378847   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1200 bytes)
	I1014 07:20:42.424772   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1014 07:20:42.425289   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 07:20:42.471397   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1014 07:20:42.471964   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1014 07:20:42.527104   13076 provision.go:87] duration metric: took 14.3100395s to configureAuth
	I1014 07:20:42.527104   13076 buildroot.go:189] setting minikube options for container-runtime
	I1014 07:20:42.527712   13076 config.go:182] Loaded profile config "ha-132600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:20:42.528252   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:44.598308   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:44.598308   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:44.599305   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:47.040424   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:47.040637   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:47.045726   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:20:47.046398   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.108.120 22 <nil> <nil>}
	I1014 07:20:47.046398   13076 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1014 07:20:47.167710   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1014 07:20:47.167801   13076 buildroot.go:70] root file system type: tmpfs
	I1014 07:20:47.168208   13076 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1014 07:20:47.168315   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:49.202111   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:49.202331   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:49.202424   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:51.648278   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:51.648278   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:51.654248   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:20:51.655052   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.108.120 22 <nil> <nil>}
	I1014 07:20:51.655052   13076 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1014 07:20:51.814048   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1014 07:20:51.814229   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:53.871643   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:53.871643   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:53.872030   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:56.286054   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:56.286054   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:56.292357   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:20:56.293125   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.108.120 22 <nil> <nil>}
	I1014 07:20:56.293125   13076 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1014 07:20:58.458028   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
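	The command above is minikube's idempotent unit install: render the unit to docker.service.new, and only move it into place, reload, enable, and restart when it differs from what is already on disk (here diff failed because no unit existed yet, so the swap ran). The same compare-and-swap, broken out line by line:

	new=/lib/systemd/system/docker.service.new
	cur=/lib/systemd/system/docker.service
	sudo diff -u "$cur" "$new" || {
	  sudo mv "$new" "$cur"
	  sudo systemctl -f daemon-reload \
	    && sudo systemctl -f enable docker \
	    && sudo systemctl -f restart docker
	}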
	
	I1014 07:20:58.458028   13076 machine.go:96] duration metric: took 44.286361s to provisionDockerMachine
	I1014 07:20:58.458028   13076 client.go:171] duration metric: took 1m52.7353371s to LocalClient.Create
	I1014 07:20:58.458028   13076 start.go:167] duration metric: took 1m52.7353371s to libmachine.API.Create "ha-132600"
	I1014 07:20:58.458028   13076 start.go:293] postStartSetup for "ha-132600" (driver="hyperv")
	I1014 07:20:58.458028   13076 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 07:20:58.471262   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 07:20:58.471262   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:21:00.512590   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:21:00.513162   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:00.513321   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:21:02.924795   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:21:02.924929   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:02.925163   13076 sshutil.go:53] new ssh client: &{IP:172.20.108.120 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\id_rsa Username:docker}
	I1014 07:21:03.032582   13076 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.5613141s)
	I1014 07:21:03.044373   13076 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 07:21:03.050831   13076 info.go:137] Remote host: Buildroot 2023.02.9
	I1014 07:21:03.050831   13076 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I1014 07:21:03.051271   13076 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I1014 07:21:03.052198   13076 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem -> 9362.pem in /etc/ssl/certs
	I1014 07:21:03.052295   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem -> /etc/ssl/certs/9362.pem
	I1014 07:21:03.063873   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 07:21:03.081968   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem --> /etc/ssl/certs/9362.pem (1708 bytes)
	I1014 07:21:03.128637   13076 start.go:296] duration metric: took 4.6706031s for postStartSetup
	I1014 07:21:03.132031   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:21:05.196102   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:21:05.196622   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:05.196696   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:21:07.624940   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:21:07.625781   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:07.626103   13076 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\config.json ...
	I1014 07:21:07.629046   13076 start.go:128] duration metric: took 2m1.9103356s to createHost
	I1014 07:21:07.629046   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:21:09.687162   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:21:09.687420   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:09.687490   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:21:12.104556   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:21:12.104556   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:12.110165   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:21:12.110572   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.108.120 22 <nil> <nil>}
	I1014 07:21:12.110572   13076 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1014 07:21:12.241524   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728915672.238854784
	
	I1014 07:21:12.241655   13076 fix.go:216] guest clock: 1728915672.238854784
	I1014 07:21:12.241726   13076 fix.go:229] Guest: 2024-10-14 07:21:12.238854784 -0700 PDT Remote: 2024-10-14 07:21:07.6290467 -0700 PDT m=+127.381500801 (delta=4.609808084s)
	I1014 07:21:12.241841   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:21:14.299338   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:21:14.299599   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:14.299599   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:21:16.767477   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:21:16.767665   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:16.775565   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:21:16.775565   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.108.120 22 <nil> <nil>}
	I1014 07:21:16.775565   13076 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1728915672
	I1014 07:21:16.921038   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Oct 14 14:21:12 UTC 2024
	
	I1014 07:21:16.921096   13076 fix.go:236] clock set: Mon Oct 14 14:21:12 UTC 2024
	 (err=<nil>)
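fix.go above reads the guest clock with `date +%s.%N`, compares it to host time (delta=4.609808084s in this run), and pushes the host's wall clock into the guest with `sudo date -s @<unix-seconds>`. A sketch of the delta computation, assuming the guest reading has already been parsed from the SSH output:

    package main

    import (
    	"fmt"
    	"math"
    	"time"
    )

    func main() {
    	guest := 1728915672.238854784 // parsed from `date +%s.%N` on the guest
    	sec, frac := math.Modf(guest)
    	guestT := time.Unix(int64(sec), int64(frac*1e9))
    	delta := guestT.Sub(time.Now())
    	if delta.Abs() > time.Second { // skew threshold is an assumption here
    		// Emit the same fix the log shows: set the guest to host wall time.
    		fmt.Printf("sudo date -s @%d\n", time.Now().Unix())
    	}
    }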
	I1014 07:21:16.921169   13076 start.go:83] releasing machines lock for "ha-132600", held for 2m11.2030498s
	I1014 07:21:16.921319   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:21:18.988904   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:21:18.989898   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:18.989947   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:21:21.410167   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:21:21.410403   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:21.414482   13076 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1014 07:21:21.414683   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:21:21.423844   13076 ssh_runner.go:195] Run: cat /version.json
	I1014 07:21:21.423844   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:21:23.554001   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:21:23.554206   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:23.554362   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:21:23.565882   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:21:23.565882   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:23.565882   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:21:26.092249   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:21:26.092249   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:26.092379   13076 sshutil.go:53] new ssh client: &{IP:172.20.108.120 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\id_rsa Username:docker}
	I1014 07:21:26.119019   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:21:26.119086   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:26.119215   13076 sshutil.go:53] new ssh client: &{IP:172.20.108.120 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\id_rsa Username:docker}
	I1014 07:21:26.179573   13076 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.764971s)
	W1014 07:21:26.179670   13076 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
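Note the exit status 127: the probe forwards the Windows-side binary name curl.exe into the Linux guest's shell, where only curl exists, so this registry check can never succeed on a Windows host. That failed probe is what surfaces a few lines below as the "! Failing to connect to https://registry.k8s.io/" warning, the same unexpected stderr that TestErrorSpam/setup flags. An equivalent 2-second probe in plain Go, which avoids depending on any guest binary (host-side sketch, not minikube's code):

    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{Timeout: 2 * time.Second} // mirrors curl -m 2
    	resp, err := client.Get("https://registry.k8s.io/")
    	if err != nil {
    		fmt.Println("registry unreachable:", err)
    		return
    	}
    	defer resp.Body.Close()
    	fmt.Println("registry reachable:", resp.Status)
    }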
	I1014 07:21:26.212879   13076 ssh_runner.go:235] Completed: cat /version.json: (4.7890281s)
	I1014 07:21:26.223815   13076 ssh_runner.go:195] Run: systemctl --version
	I1014 07:21:26.243251   13076 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 07:21:26.251321   13076 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 07:21:26.262431   13076 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 07:21:26.290027   13076 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1014 07:21:26.290027   13076 start.go:495] detecting cgroup driver to use...
	I1014 07:21:26.290027   13076 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W1014 07:21:26.296743   13076 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W1014 07:21:26.296743   13076 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1014 07:21:26.340326   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1014 07:21:26.370362   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1014 07:21:26.391878   13076 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1014 07:21:26.402847   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1014 07:21:26.433241   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1014 07:21:26.462619   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1014 07:21:26.491735   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1014 07:21:26.520404   13076 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 07:21:26.550615   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1014 07:21:26.580278   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1014 07:21:26.608564   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1014 07:21:26.636849   13076 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 07:21:26.656448   13076 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1014 07:21:26.667504   13076 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1014 07:21:26.700184   13076 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
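The sysctl probe fails with status 255 because /proc/sys/net/bridge/ only exists once the br_netfilter module is loaded; the runner therefore loads the module and then enables IPv4 forwarding. The same two steps as a guest-side Go sketch (must run as root):

    package main

    import (
    	"log"
    	"os"
    	"os/exec"
    )

    func main() {
    	// Load br_netfilter so /proc/sys/net/bridge/* exists for the sysctl check.
    	if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
    		log.Fatal(err)
    	}
    	// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
    	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
    		log.Fatal(err)
    	}
    }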
	I1014 07:21:26.726834   13076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:21:26.916163   13076 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1014 07:21:26.953200   13076 start.go:495] detecting cgroup driver to use...
	I1014 07:21:26.967627   13076 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1014 07:21:27.005080   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 07:21:27.039193   13076 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 07:21:27.083701   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 07:21:27.118670   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1014 07:21:27.153508   13076 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1014 07:21:27.209689   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1014 07:21:27.230765   13076 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 07:21:27.273213   13076 ssh_runner.go:195] Run: which cri-dockerd
	I1014 07:21:27.291783   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1014 07:21:27.308581   13076 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1014 07:21:27.351709   13076 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1014 07:21:27.539571   13076 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1014 07:21:27.725567   13076 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1014 07:21:27.725567   13076 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1014 07:21:27.768723   13076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:21:27.951142   13076 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1014 07:21:30.486027   13076 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5347969s)
	I1014 07:21:30.497722   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1014 07:21:30.531680   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1014 07:21:30.563742   13076 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1014 07:21:30.767218   13076 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1014 07:21:30.967093   13076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:21:31.191192   13076 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1014 07:21:31.229757   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1014 07:21:31.267426   13076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:21:31.462408   13076 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
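The cri-docker unit sequence above follows a fixed order: unmask the socket, enable it, daemon-reload so systemd re-reads the 10-cni.conf drop-in written earlier, then restart socket and service. Compressed into one loop (sketch; the exact ordering is taken from the log lines above):

    package main

    import (
    	"log"
    	"os/exec"
    )

    func main() {
    	steps := [][]string{
    		{"systemctl", "unmask", "cri-docker.socket"},
    		{"systemctl", "enable", "cri-docker.socket"},
    		{"systemctl", "daemon-reload"}, // pick up the 10-cni.conf drop-in
    		{"systemctl", "restart", "cri-docker.socket"},
    		{"systemctl", "restart", "cri-docker.service"},
    	}
    	for _, s := range steps {
    		if err := exec.Command("sudo", s...).Run(); err != nil {
    			log.Fatalf("%v: %v", s, err)
    		}
    	}
    }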
	I1014 07:21:31.569509   13076 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1014 07:21:31.581151   13076 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1014 07:21:31.591322   13076 start.go:563] Will wait 60s for crictl version
	I1014 07:21:31.604845   13076 ssh_runner.go:195] Run: which crictl
	I1014 07:21:31.622874   13076 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1014 07:21:31.680621   13076 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I1014 07:21:31.689423   13076 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1014 07:21:31.732594   13076 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1014 07:21:31.766375   13076 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	I1014 07:21:31.766375   13076 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I1014 07:21:31.770835   13076 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I1014 07:21:31.770835   13076 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I1014 07:21:31.770835   13076 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I1014 07:21:31.770835   13076 ip.go:211] Found interface: {Index:13 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:08:65:4d Flags:up|broadcast|multicast|running}
	I1014 07:21:31.773955   13076 ip.go:214] interface addr: fe80::b548:ff79:d464:75b8/64
	I1014 07:21:31.773955   13076 ip.go:214] interface addr: 172.20.96.1/20
	I1014 07:21:31.784171   13076 ssh_runner.go:195] Run: grep 172.20.96.1	host.minikube.internal$ /etc/hosts
	I1014 07:21:31.791192   13076 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.20.96.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
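ip.go above walks the host's network interfaces for one whose name matches "vEthernet (Default Switch)", takes its IPv4 address (172.20.96.1/20), and pins it as host.minikube.internal in the guest's /etc/hosts. The interface-discovery half in Go looks like this (sketch):

    package main

    import (
    	"fmt"
    	"log"
    	"net"
    	"strings"
    )

    func main() {
    	ifaces, err := net.Interfaces()
    	if err != nil {
    		log.Fatal(err)
    	}
    	for _, ifc := range ifaces {
    		if !strings.HasPrefix(ifc.Name, "vEthernet (Default Switch)") {
    			continue // skips "Ethernet 2", "Loopback Pseudo-Interface 1" as in the log
    		}
    		addrs, _ := ifc.Addrs()
    		for _, a := range addrs {
    			if ipn, ok := a.(*net.IPNet); ok && ipn.IP.To4() != nil {
    				fmt.Println(ipn.IP) // 172.20.96.1 in the run above
    			}
    		}
    	}
    }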
	I1014 07:21:31.825649   13076 kubeadm.go:883] updating cluster {Name:ha-132600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-132600 Namespace:default APIServerHAVIP:172.20.111.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.108.120 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 07:21:31.825649   13076 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1014 07:21:31.834667   13076 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1014 07:21:31.867707   13076 docker.go:689] Got preloaded images: 
	I1014 07:21:31.867707   13076 docker.go:695] registry.k8s.io/kube-apiserver:v1.31.1 wasn't preloaded
	I1014 07:21:31.880238   13076 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1014 07:21:31.908893   13076 ssh_runner.go:195] Run: which lz4
	I1014 07:21:31.914374   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1014 07:21:31.924715   13076 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1014 07:21:31.930846   13076 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1014 07:21:31.931075   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (342028912 bytes)
	I1014 07:21:33.829320   13076 docker.go:653] duration metric: took 1.9148657s to copy over tarball
	I1014 07:21:33.840382   13076 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1014 07:21:42.539043   13076 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.6986506s)
	I1014 07:21:42.539174   13076 ssh_runner.go:146] rm: /preloaded.tar.lz4
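The preload path above is: stat /preloaded.tar.lz4 (missing on first boot), scp the ~342 MB tarball into the guest, untar it over /var with lz4 decompression to populate the docker image store, then delete the tarball. The extraction step as a standalone invocation (guest-side sketch, same flags as the log):

    package main

    import (
    	"log"
    	"os"
    	"os/exec"
    )

    func main() {
    	const tarball = "/preloaded.tar.lz4"
    	if _, err := os.Stat(tarball); err != nil {
    		log.Fatalf("tarball not copied in yet: %v", err)
    	}
    	// Preserve security xattrs and decompress with lz4, exactly as logged.
    	cmd := exec.Command("sudo", "tar", "--xattrs",
    		"--xattrs-include", "security.capability",
    		"-I", "lz4", "-C", "/var", "-xf", tarball)
    	if err := cmd.Run(); err != nil {
    		log.Fatal(err)
    	}
    	// Free the ~342 MB once the image store is populated.
    	exec.Command("sudo", "rm", "-f", tarball).Run()
    }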
	I1014 07:21:42.611065   13076 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1014 07:21:42.636639   13076 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I1014 07:21:42.681041   13076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:21:42.879066   13076 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1014 07:21:46.161713   13076 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.2824935s)
	I1014 07:21:46.173131   13076 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1014 07:21:46.198645   13076 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1014 07:21:46.198645   13076 cache_images.go:84] Images are preloaded, skipping loading
	I1014 07:21:46.198645   13076 kubeadm.go:934] updating node { 172.20.108.120 8443 v1.31.1 docker true true} ...
	I1014 07:21:46.198645   13076 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-132600 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.20.108.120
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-132600 Namespace:default APIServerHAVIP:172.20.111.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 07:21:46.209892   13076 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1014 07:21:46.283733   13076 cni.go:84] Creating CNI manager for ""
	I1014 07:21:46.283861   13076 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1014 07:21:46.283932   13076 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1014 07:21:46.284000   13076 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.20.108.120 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-132600 NodeName:ha-132600 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.20.108.120"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.20.108.120 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 07:21:46.284290   13076 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.20.108.120
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-132600"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "172.20.108.120"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.20.108.120"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1014 07:21:46.284367   13076 kube-vip.go:115] generating kube-vip config ...
	I1014 07:21:46.295693   13076 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1014 07:21:46.324017   13076 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1014 07:21:46.324230   13076 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.20.111.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I1014 07:21:46.334184   13076 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1014 07:21:46.349321   13076 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 07:21:46.360771   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1014 07:21:46.379224   13076 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (310 bytes)
	I1014 07:21:46.412080   13076 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 07:21:46.441691   13076 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I1014 07:21:46.479025   13076 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1014 07:21:46.519461   13076 ssh_runner.go:195] Run: grep 172.20.111.254	control-plane.minikube.internal$ /etc/hosts
	I1014 07:21:46.525021   13076 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.20.111.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 07:21:46.554943   13076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:21:46.730466   13076 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 07:21:46.761153   13076 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600 for IP: 172.20.108.120
	I1014 07:21:46.761153   13076 certs.go:194] generating shared ca certs ...
	I1014 07:21:46.761153   13076 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:21:46.761825   13076 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I1014 07:21:46.762680   13076 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I1014 07:21:46.762846   13076 certs.go:256] generating profile certs ...
	I1014 07:21:46.763605   13076 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\client.key
	I1014 07:21:46.763605   13076 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\client.crt with IP's: []
	I1014 07:21:46.955865   13076 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\client.crt ...
	I1014 07:21:46.955865   13076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\client.crt: {Name:mk4a5e51a4b9d62279c7ef4ef47716bcdf88d704 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:21:46.957857   13076 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\client.key ...
	I1014 07:21:46.957857   13076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\client.key: {Name:mkc80e99cfe4d8d17198a87fb188cfa1a72d727d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:21:46.958881   13076 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.key.59094d61
	I1014 07:21:46.958881   13076 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.crt.59094d61 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.20.108.120 172.20.111.254]
	I1014 07:21:47.375660   13076 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.crt.59094d61 ...
	I1014 07:21:47.375660   13076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.crt.59094d61: {Name:mke858ab0ac103663123708f9814cfdffe03211e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:21:47.377216   13076 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.key.59094d61 ...
	I1014 07:21:47.377216   13076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.key.59094d61: {Name:mk972fca892a193d6b4e84d42e27ed86ce9f0457 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:21:47.377811   13076 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.crt.59094d61 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.crt
	I1014 07:21:47.391806   13076 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.key.59094d61 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.key
	I1014 07:21:47.392823   13076 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.key
	I1014 07:21:47.392823   13076 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.crt with IP's: []
	I1014 07:21:47.675751   13076 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.crt ...
	I1014 07:21:47.675751   13076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.crt: {Name:mk5a9f1239213dd0a0f36b4ee41d5ac56cbd79c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:21:47.677918   13076 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.key ...
	I1014 07:21:47.677918   13076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.key: {Name:mkb931094180fef1516b0d43acb6430ecbcbb3fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
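certs.go reuses the cached minikubeCA (the "skipping valid ... ca cert" lines) and mints per-profile certs whose IP SANs cover the service VIP (10.96.0.1), localhost, the node IP and the HA API-server VIP (172.20.111.254). A compact crypto/x509 sketch of CA-signed cert generation with IP SANs; illustrative only, not minikube's certs.go, and the CA is regenerated here rather than loaded from disk:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"log"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Self-signed CA (minikube caches this across profiles).
    	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		log.Fatal(err)
    	}
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now().Add(-time.Hour),
    		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
    		BasicConstraintsValid: true,
    	}
    	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	if err != nil {
    		log.Fatal(err)
    	}
    	caCert, _ := x509.ParseCertificate(caDER)

    	// API-server cert with the IP SANs shown in the log line above.
    	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	leafTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		IPAddresses: []net.IP{
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
    			net.ParseIP("10.0.0.1"), net.ParseIP("172.20.108.120"),
    			net.ParseIP("172.20.111.254"),
    		},
    		NotBefore:   time.Now().Add(-time.Hour),
    		NotAfter:    time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	leafDER, err := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
    	if err != nil {
    		log.Fatal(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
    }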
	I1014 07:21:47.678894   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I1014 07:21:47.678894   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I1014 07:21:47.678894   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1014 07:21:47.678894   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1014 07:21:47.679943   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1014 07:21:47.679943   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1014 07:21:47.679943   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1014 07:21:47.691415   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1014 07:21:47.693468   13076 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\936.pem (1338 bytes)
	W1014 07:21:47.693468   13076 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\936_empty.pem, impossibly tiny 0 bytes
	I1014 07:21:47.694004   13076 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1014 07:21:47.694400   13076 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I1014 07:21:47.694671   13076 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1014 07:21:47.695091   13076 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I1014 07:21:47.695960   13076 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem (1708 bytes)
	I1014 07:21:47.696145   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\936.pem -> /usr/share/ca-certificates/936.pem
	I1014 07:21:47.696401   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem -> /usr/share/ca-certificates/9362.pem
	I1014 07:21:47.696580   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1014 07:21:47.697580   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 07:21:47.742705   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1014 07:21:47.790988   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 07:21:47.837975   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 07:21:47.884872   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1014 07:21:47.936878   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1014 07:21:47.985330   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 07:21:48.029181   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 07:21:48.078824   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\936.pem --> /usr/share/ca-certificates/936.pem (1338 bytes)
	I1014 07:21:48.127907   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem --> /usr/share/ca-certificates/9362.pem (1708 bytes)
	I1014 07:21:48.174542   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 07:21:48.218635   13076 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 07:21:48.264009   13076 ssh_runner.go:195] Run: openssl version
	I1014 07:21:48.282939   13076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 07:21:48.310256   13076 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 07:21:48.318050   13076 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 13:44 /usr/share/ca-certificates/minikubeCA.pem
	I1014 07:21:48.328768   13076 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 07:21:48.348090   13076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 07:21:48.376484   13076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/936.pem && ln -fs /usr/share/ca-certificates/936.pem /etc/ssl/certs/936.pem"
	I1014 07:21:48.408494   13076 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/936.pem
	I1014 07:21:48.416513   13076 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 14:00 /usr/share/ca-certificates/936.pem
	I1014 07:21:48.427815   13076 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/936.pem
	I1014 07:21:48.447033   13076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/936.pem /etc/ssl/certs/51391683.0"
	I1014 07:21:48.474293   13076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9362.pem && ln -fs /usr/share/ca-certificates/9362.pem /etc/ssl/certs/9362.pem"
	I1014 07:21:48.502293   13076 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9362.pem
	I1014 07:21:48.509688   13076 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 14:00 /usr/share/ca-certificates/9362.pem
	I1014 07:21:48.519811   13076 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9362.pem
	I1014 07:21:48.540403   13076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9362.pem /etc/ssl/certs/3ec20f2e.0"
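The `openssl x509 -hash -noout` / `ln -fs` pairs above build the OpenSSL trust-directory layout: each CA file must be reachable as /etc/ssl/certs/<subject-hash>.0 (3ec20f2e.0 for 9362.pem in this run). The same two steps in Go (guest-side sketch):

    package main

    import (
    	"log"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	cert := "/usr/share/ca-certificates/9362.pem"
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
    	if err != nil {
    		log.Fatal(err)
    	}
    	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0" // e.g. 3ec20f2e.0
    	os.Remove(link) // emulate ln -fs: replace any stale link
    	if err := os.Symlink(cert, link); err != nil {
    		log.Fatal(err)
    	}
    }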
	I1014 07:21:48.569007   13076 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 07:21:48.575758   13076 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1014 07:21:48.576118   13076 kubeadm.go:392] StartCluster: {Name:ha-132600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-132600 Namespace:default APIServerHAVIP:172.20.111.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.108.120 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 07:21:48.584978   13076 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1014 07:21:48.619430   13076 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 07:21:48.647606   13076 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 07:21:48.676291   13076 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 07:21:48.693679   13076 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 07:21:48.693679   13076 kubeadm.go:157] found existing configuration files:
	
	I1014 07:21:48.703613   13076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 07:21:48.722067   13076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 07:21:48.732618   13076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 07:21:48.759836   13076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 07:21:48.777622   13076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 07:21:48.789748   13076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 07:21:48.818512   13076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 07:21:48.837396   13076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 07:21:48.848696   13076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 07:21:48.876926   13076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 07:21:48.893530   13076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 07:21:48.904372   13076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
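The grep/rm pairs above are the stale-config sweep: each kubeconfig under /etc/kubernetes must point at https://control-plane.minikube.internal:8443, and any file that doesn't (or, as here on first start, doesn't exist) is removed before kubeadm init runs. Collapsed into one loop (guest-side sketch):

    package main

    import (
    	"os"
    	"strings"
    )

    func main() {
    	const endpoint = "https://control-plane.minikube.internal:8443"
    	for _, c := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
    		path := "/etc/kubernetes/" + c
    		data, err := os.ReadFile(path)
    		if err != nil || !strings.Contains(string(data), endpoint) {
    			os.Remove(path) // missing or pointing elsewhere: clear it for kubeadm init
    		}
    	}
    }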
	I1014 07:21:48.920210   13076 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1014 07:21:49.392185   13076 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 07:22:04.140256   13076 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1014 07:22:04.140412   13076 kubeadm.go:310] [preflight] Running pre-flight checks
	I1014 07:22:04.140501   13076 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 07:22:04.140848   13076 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 07:22:04.141206   13076 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 07:22:04.141311   13076 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 07:22:04.144180   13076 out.go:235]   - Generating certificates and keys ...
	I1014 07:22:04.144373   13076 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1014 07:22:04.144575   13076 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1014 07:22:04.144939   13076 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1014 07:22:04.145172   13076 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1014 07:22:04.145315   13076 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1014 07:22:04.145521   13076 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1014 07:22:04.145845   13076 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1014 07:22:04.146212   13076 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-132600 localhost] and IPs [172.20.108.120 127.0.0.1 ::1]
	I1014 07:22:04.146365   13076 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1014 07:22:04.146614   13076 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-132600 localhost] and IPs [172.20.108.120 127.0.0.1 ::1]
	I1014 07:22:04.147003   13076 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1014 07:22:04.147200   13076 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1014 07:22:04.147236   13076 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1014 07:22:04.147236   13076 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 07:22:04.147236   13076 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 07:22:04.147236   13076 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 07:22:04.147954   13076 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 07:22:04.147987   13076 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 07:22:04.147987   13076 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 07:22:04.147987   13076 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 07:22:04.148785   13076 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 07:22:04.151235   13076 out.go:235]   - Booting up control plane ...
	I1014 07:22:04.151235   13076 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 07:22:04.151819   13076 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 07:22:04.152046   13076 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 07:22:04.152333   13076 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 07:22:04.152333   13076 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 07:22:04.152333   13076 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1014 07:22:04.152333   13076 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 07:22:04.152333   13076 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 07:22:04.153426   13076 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 523.263843ms
	I1014 07:22:04.153534   13076 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1014 07:22:04.153534   13076 kubeadm.go:310] [api-check] The API server is healthy after 9.176461865s
	I1014 07:22:04.153534   13076 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1014 07:22:04.154243   13076 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1014 07:22:04.154243   13076 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1014 07:22:04.154779   13076 kubeadm.go:310] [mark-control-plane] Marking the node ha-132600 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1014 07:22:04.154779   13076 kubeadm.go:310] [bootstrap-token] Using token: rm3d8a.inqsemtt5b5l5x2e
	I1014 07:22:04.157926   13076 out.go:235]   - Configuring RBAC rules ...
	I1014 07:22:04.158760   13076 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1014 07:22:04.158928   13076 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1014 07:22:04.158928   13076 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1014 07:22:04.159656   13076 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1014 07:22:04.159851   13076 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1014 07:22:04.160089   13076 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1014 07:22:04.160270   13076 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1014 07:22:04.160529   13076 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1014 07:22:04.160529   13076 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1014 07:22:04.160529   13076 kubeadm.go:310] 
	I1014 07:22:04.160529   13076 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1014 07:22:04.160529   13076 kubeadm.go:310] 
	I1014 07:22:04.160529   13076 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1014 07:22:04.160529   13076 kubeadm.go:310] 
	I1014 07:22:04.161100   13076 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1014 07:22:04.161186   13076 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1014 07:22:04.161352   13076 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1014 07:22:04.161352   13076 kubeadm.go:310] 
	I1014 07:22:04.161534   13076 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1014 07:22:04.161534   13076 kubeadm.go:310] 
	I1014 07:22:04.161706   13076 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1014 07:22:04.161706   13076 kubeadm.go:310] 
	I1014 07:22:04.161950   13076 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1014 07:22:04.161950   13076 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1014 07:22:04.161950   13076 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1014 07:22:04.161950   13076 kubeadm.go:310] 
	I1014 07:22:04.162515   13076 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1014 07:22:04.162607   13076 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1014 07:22:04.162607   13076 kubeadm.go:310] 
	I1014 07:22:04.162607   13076 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token rm3d8a.inqsemtt5b5l5x2e \
	I1014 07:22:04.162607   13076 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f876f951efbe7f1ed47ca578ee0d33e6ce8fe69c1a008a8154ee00f56ac84e46 \
	I1014 07:22:04.163162   13076 kubeadm.go:310] 	--control-plane 
	I1014 07:22:04.163261   13076 kubeadm.go:310] 
	I1014 07:22:04.163340   13076 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1014 07:22:04.163340   13076 kubeadm.go:310] 
	I1014 07:22:04.163570   13076 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token rm3d8a.inqsemtt5b5l5x2e \
	I1014 07:22:04.163570   13076 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f876f951efbe7f1ed47ca578ee0d33e6ce8fe69c1a008a8154ee00f56ac84e46 
	I1014 07:22:04.163570   13076 cni.go:84] Creating CNI manager for ""
	I1014 07:22:04.163570   13076 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1014 07:22:04.167124   13076 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1014 07:22:04.179134   13076 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1014 07:22:04.187701   13076 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1014 07:22:04.187701   13076 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1014 07:22:04.233053   13076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1014 07:22:04.912894   13076 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1014 07:22:04.925574   13076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-132600 minikube.k8s.io/updated_at=2024_10_14T07_22_04_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b minikube.k8s.io/name=ha-132600 minikube.k8s.io/primary=true
	I1014 07:22:04.927795   13076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 07:22:04.958241   13076 ops.go:34] apiserver oom_adj: -16
	I1014 07:22:05.168034   13076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 07:22:05.667048   13076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 07:22:06.166157   13076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 07:22:06.667561   13076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 07:22:07.167283   13076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 07:22:07.305312   13076 kubeadm.go:1113] duration metric: took 2.3924152s to wait for elevateKubeSystemPrivileges
	I1014 07:22:07.306284   13076 kubeadm.go:394] duration metric: took 18.730142s to StartCluster
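
	[editor's note] The repeated "kubectl get sa default" runs above are a poll: minikube retries until the kube-system controllers have created the default ServiceAccount, then logs the elapsed time (2.39s here). A minimal stand-alone sketch of the same wait, using the binary and kubeconfig paths as logged; the 120s timeout is an assumption, not minikube's value:

	    # Poll until the default ServiceAccount exists, or give up.
	    deadline=$(( $(date +%s) + 120 ))   # assumed timeout
	    until sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default \
	          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      if [ "$(date +%s)" -ge "$deadline" ]; then
	        echo "timed out waiting for default service account" >&2
	        exit 1
	      fi
	      sleep 0.5
	    done
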
	I1014 07:22:07.306284   13076 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:22:07.306284   13076 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1014 07:22:07.308077   13076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:22:07.309777   13076 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1014 07:22:07.309777   13076 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.20.108.120 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1014 07:22:07.309953   13076 start.go:241] waiting for startup goroutines ...
	I1014 07:22:07.309868   13076 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1014 07:22:07.310143   13076 addons.go:69] Setting storage-provisioner=true in profile "ha-132600"
	I1014 07:22:07.310235   13076 addons.go:234] Setting addon storage-provisioner=true in "ha-132600"
	I1014 07:22:07.310143   13076 addons.go:69] Setting default-storageclass=true in profile "ha-132600"
	I1014 07:22:07.310402   13076 config.go:182] Loaded profile config "ha-132600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:22:07.310402   13076 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-132600"
	I1014 07:22:07.310402   13076 host.go:66] Checking if "ha-132600" exists ...
	I1014 07:22:07.311457   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:22:07.311926   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:22:07.495748   13076 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.20.96.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1014 07:22:08.130501   13076 start.go:971] {"host.minikube.internal": 172.20.96.1} host record injected into CoreDNS's ConfigMap
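
	[editor's note] The sed pipeline above rewrites the coredns ConfigMap so that pods can resolve host.minikube.internal to the host's gateway address. Unescaped, the block it inserts ahead of the Corefile's forward plugin (it also adds a log directive before errors) is:

	    hosts {
	       172.20.96.1 host.minikube.internal
	       fallthrough
	    }
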
	I1014 07:22:09.594501   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:22:09.594501   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:09.594786   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:22:09.594878   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:09.595740   13076 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1014 07:22:09.596691   13076 kapi.go:59] client config for ha-132600: &rest.Config{Host:"https://172.20.111.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-132600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-132600\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2926ae0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1014 07:22:09.597939   13076 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 07:22:09.598267   13076 cert_rotation.go:140] Starting client certificate rotation controller
	I1014 07:22:09.598703   13076 addons.go:234] Setting addon default-storageclass=true in "ha-132600"
	I1014 07:22:09.598956   13076 host.go:66] Checking if "ha-132600" exists ...
	I1014 07:22:09.600143   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:22:09.600143   13076 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 07:22:09.600247   13076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1014 07:22:09.600352   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:22:11.901872   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:22:11.901872   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:11.902855   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:22:11.913525   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:22:11.914554   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:11.914725   13076 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1014 07:22:11.914841   13076 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1014 07:22:11.915002   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:22:14.163865   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:22:14.163865   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:14.163865   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:22:14.597752   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:22:14.597752   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:14.598273   13076 sshutil.go:53] new ssh client: &{IP:172.20.108.120 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\id_rsa Username:docker}
	I1014 07:22:14.755906   13076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 07:22:16.766331   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:22:16.766798   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:16.767079   13076 sshutil.go:53] new ssh client: &{IP:172.20.108.120 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\id_rsa Username:docker}
	I1014 07:22:16.907784   13076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1014 07:22:17.107355   13076 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1014 07:22:17.107355   13076 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1014 07:22:17.108322   13076 round_trippers.go:463] GET https://172.20.111.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1014 07:22:17.108322   13076 round_trippers.go:469] Request Headers:
	I1014 07:22:17.108322   13076 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:22:17.108322   13076 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:22:17.120122   13076 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1014 07:22:17.120927   13076 round_trippers.go:463] PUT https://172.20.111.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1014 07:22:17.121006   13076 round_trippers.go:469] Request Headers:
	I1014 07:22:17.121006   13076 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:22:17.121006   13076 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:22:17.121089   13076 round_trippers.go:473]     Content-Type: application/json
	I1014 07:22:17.124876   13076 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 07:22:17.128782   13076 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1014 07:22:17.133792   13076 addons.go:510] duration metric: took 9.8239112s for enable addons: enabled=[storage-provisioner default-storageclass]
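
	[editor's note] Both addons were applied with plain `kubectl apply -f` over SSH, and the PUT to .../storageclasses/standard above marks the standard StorageClass as default. Hypothetical post-checks from the host, assuming the kubeconfig context carries the profile name as the log's kubeconfig update suggests; these are not commands from the logged run:

	    # Confirm the default StorageClass and the kube-system pods.
	    kubectl --context ha-132600 get storageclass
	    kubectl --context ha-132600 -n kube-system get pods
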
	I1014 07:22:17.133792   13076 start.go:246] waiting for cluster config update ...
	I1014 07:22:17.133792   13076 start.go:255] writing updated cluster config ...
	I1014 07:22:17.140793   13076 out.go:201] 
	I1014 07:22:17.150794   13076 config.go:182] Loaded profile config "ha-132600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:22:17.151784   13076 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\config.json ...
	I1014 07:22:17.157758   13076 out.go:177] * Starting "ha-132600-m02" control-plane node in "ha-132600" cluster
	I1014 07:22:17.160381   13076 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1014 07:22:17.160381   13076 cache.go:56] Caching tarball of preloaded images
	I1014 07:22:17.160381   13076 preload.go:172] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1014 07:22:17.160381   13076 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1014 07:22:17.160381   13076 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\config.json ...
	I1014 07:22:17.168672   13076 start.go:360] acquireMachinesLock for ha-132600-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 07:22:17.169194   13076 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-132600-m02"
	I1014 07:22:17.169334   13076 start.go:93] Provisioning new machine with config: &{Name:ha-132600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-132600 Namespace:default APIServerHAVIP:172.20.111.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.108.120 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1014 07:22:17.169334   13076 start.go:125] createHost starting for "m02" (driver="hyperv")
	I1014 07:22:17.171957   13076 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1014 07:22:17.171957   13076 start.go:159] libmachine.API.Create for "ha-132600" (driver="hyperv")
	I1014 07:22:17.171957   13076 client.go:168] LocalClient.Create starting
	I1014 07:22:17.171957   13076 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I1014 07:22:17.173148   13076 main.go:141] libmachine: Decoding PEM data...
	I1014 07:22:17.173148   13076 main.go:141] libmachine: Parsing certificate...
	I1014 07:22:17.173148   13076 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I1014 07:22:17.173148   13076 main.go:141] libmachine: Decoding PEM data...
	I1014 07:22:17.173802   13076 main.go:141] libmachine: Parsing certificate...
	I1014 07:22:17.173802   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I1014 07:22:19.094383   13076 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I1014 07:22:19.094383   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:19.094970   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I1014 07:22:20.808796   13076 main.go:141] libmachine: [stdout =====>] : False
	
	I1014 07:22:20.808796   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:20.808897   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1014 07:22:22.292959   13076 main.go:141] libmachine: [stdout =====>] : True
	
	I1014 07:22:22.292959   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:22.293074   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1014 07:22:25.811891   13076 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1014 07:22:25.811891   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:25.813583   13076 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1014 07:22:26.334218   13076 main.go:141] libmachine: Creating SSH key...
	I1014 07:22:26.579833   13076 main.go:141] libmachine: Creating VM...
	I1014 07:22:26.580305   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1014 07:22:29.402237   13076 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1014 07:22:29.402237   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:29.402237   13076 main.go:141] libmachine: Using switch "Default Switch"
	I1014 07:22:29.402237   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1014 07:22:31.181025   13076 main.go:141] libmachine: [stdout =====>] : True
	
	I1014 07:22:31.181025   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:31.181025   13076 main.go:141] libmachine: Creating VHD
	I1014 07:22:31.181025   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I1014 07:22:34.986705   13076 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 9BEBB030-6569-4A5D-A43D-C976E242B24D
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I1014 07:22:34.986953   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:34.986953   13076 main.go:141] libmachine: Writing magic tar header
	I1014 07:22:34.986953   13076 main.go:141] libmachine: Writing SSH key tar header
	I1014 07:22:34.998864   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I1014 07:22:38.117497   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:22:38.117497   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:38.117497   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\disk.vhd' -SizeBytes 20000MB
	I1014 07:22:40.750048   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:22:40.750048   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:40.750048   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-132600-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I1014 07:22:44.293371   13076 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-132600-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I1014 07:22:44.294009   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:44.294158   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-132600-m02 -DynamicMemoryEnabled $false
	I1014 07:22:46.495846   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:22:46.496106   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:46.496211   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-132600-m02 -Count 2
	I1014 07:22:48.637702   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:22:48.637702   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:48.637702   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-132600-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\boot2docker.iso'
	I1014 07:22:51.128606   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:22:51.128606   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:51.128606   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-132600-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\disk.vhd'
	I1014 07:22:53.765202   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:22:53.765775   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:53.765775   13076 main.go:141] libmachine: Starting VM...
	I1014 07:22:53.765847   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-132600-m02
	I1014 07:22:56.961396   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:22:56.961396   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:56.962040   13076 main.go:141] libmachine: Waiting for host to start...
	I1014 07:22:56.962040   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:22:59.251592   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:22:59.251592   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:59.251838   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:01.758067   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:23:01.758159   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:02.758293   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:04.921490   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:04.921565   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:04.921802   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:07.441595   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:23:07.441595   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:08.443053   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:10.659721   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:10.659721   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:10.660418   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:13.126618   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:23:13.126952   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:14.127183   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:16.307414   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:16.307414   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:16.307500   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:18.748902   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:23:18.749678   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:19.749962   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:21.893323   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:21.893323   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:21.893323   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:24.464351   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:23:24.465392   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:24.465471   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:26.567626   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:26.567626   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:26.567626   13076 machine.go:93] provisionDockerMachine start ...
	I1014 07:23:26.568400   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:28.681591   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:28.681591   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:28.682618   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:31.178308   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:23:31.178308   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:31.192309   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:23:31.207523   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.111.83 22 <nil> <nil>}
	I1014 07:23:31.208024   13076 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 07:23:31.337861   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1014 07:23:31.337861   13076 buildroot.go:166] provisioning hostname "ha-132600-m02"
	I1014 07:23:31.337941   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:33.402839   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:33.402839   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:33.403340   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:35.894312   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:23:35.894312   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:35.901210   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:23:35.901699   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.111.83 22 <nil> <nil>}
	I1014 07:23:35.901823   13076 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-132600-m02 && echo "ha-132600-m02" | sudo tee /etc/hostname
	I1014 07:23:36.071040   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-132600-m02
	
	I1014 07:23:36.071040   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:38.150707   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:38.151729   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:38.151729   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:40.639242   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:23:40.639242   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:40.644598   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:23:40.645419   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.111.83 22 <nil> <nil>}
	I1014 07:23:40.645490   13076 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-132600-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-132600-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-132600-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 07:23:40.801974   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 07:23:40.801974   13076 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I1014 07:23:40.801974   13076 buildroot.go:174] setting up certificates
	I1014 07:23:40.801974   13076 provision.go:84] configureAuth start
	I1014 07:23:40.801974   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:42.869101   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:42.869101   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:42.869101   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:45.382211   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:23:45.382688   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:45.382688   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:47.501182   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:47.501778   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:47.501832   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:50.008838   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:23:50.008838   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:50.009193   13076 provision.go:143] copyHostCerts
	I1014 07:23:50.009346   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I1014 07:23:50.009346   13076 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I1014 07:23:50.009346   13076 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I1014 07:23:50.010015   13076 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1014 07:23:50.011395   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I1014 07:23:50.011967   13076 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I1014 07:23:50.011967   13076 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I1014 07:23:50.011967   13076 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1014 07:23:50.013212   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I1014 07:23:50.014109   13076 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I1014 07:23:50.014109   13076 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I1014 07:23:50.014507   13076 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I1014 07:23:50.015649   13076 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-132600-m02 san=[127.0.0.1 172.20.111.83 ha-132600-m02 localhost minikube]
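
	[editor's note] The server certificate minted above carries the SANs listed in the log (127.0.0.1, 172.20.111.83, ha-132600-m02, localhost, minikube). A hypothetical way to inspect them, assuming openssl is available; the path is the logged server.pem, shortened here:

	    # Print the Subject Alternative Name extension of the generated cert.
	    openssl x509 -noout -text -in server.pem | grep -A1 'Subject Alternative Name'
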
	I1014 07:23:50.130152   13076 provision.go:177] copyRemoteCerts
	I1014 07:23:50.140319   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 07:23:50.140319   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:52.242533   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:52.242533   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:52.243067   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:54.757999   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:23:54.758132   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:54.758132   13076 sshutil.go:53] new ssh client: &{IP:172.20.111.83 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\id_rsa Username:docker}
	I1014 07:23:54.867658   13076 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7273333s)
	I1014 07:23:54.867658   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1014 07:23:54.867658   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1014 07:23:54.915126   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1014 07:23:54.915808   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I1014 07:23:54.962775   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1014 07:23:54.963449   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1014 07:23:55.012733   13076 provision.go:87] duration metric: took 14.2107408s to configureAuth
	I1014 07:23:55.012733   13076 buildroot.go:189] setting minikube options for container-runtime
	I1014 07:23:55.013734   13076 config.go:182] Loaded profile config "ha-132600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:23:55.013734   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:57.174688   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:57.174688   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:57.175736   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:59.668037   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:23:59.668597   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:59.675074   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:23:59.675608   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.111.83 22 <nil> <nil>}
	I1014 07:23:59.675873   13076 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1014 07:23:59.826626   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1014 07:23:59.826626   13076 buildroot.go:70] root file system type: tmpfs
	I1014 07:23:59.826926   13076 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1014 07:23:59.827038   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:01.963890   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:01.964568   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:01.964568   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:04.515125   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:04.515943   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:04.521824   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:24:04.522248   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.111.83 22 <nil> <nil>}
	I1014 07:24:04.522248   13076 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.20.108.120"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1014 07:24:04.691427   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.20.108.120
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
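
	[editor's note] Compare the unit text sent (before the SSH output) with the file contents echoed back: \$MAINPID arrived as $MAINPID, and the quotes around the Environment value were consumed. That is one level of shell processing on the remote side. A tiny demonstration of the same effect:

	    # In a double-quoted shell string, \$ reaches the output as a literal $.
	    printf %s "ExecReload=/bin/kill -s HUP \$MAINPID" > /tmp/demo.unit
	    cat /tmp/demo.unit    # -> ExecReload=/bin/kill -s HUP $MAINPID
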
	I1014 07:24:04.692090   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:06.819823   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:06.820339   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:06.820339   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:09.321630   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:09.321630   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:09.326748   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:24:09.327748   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.111.83 22 <nil> <nil>}
	I1014 07:24:09.327808   13076 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1014 07:24:11.561493   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
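
	[editor's note] The command above is a compare-then-swap: the rendered unit is written to docker.service.new, diffed against the live unit, and only moved into place (with a daemon-reload/enable/restart) when they differ. Here diff failed because no unit existed yet, so the swap ran. The same idiom, isolated from the logged one-liner:

	    # Only replace the unit and restart the daemon when content changed.
	    if ! sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new; then
	      sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
	      sudo systemctl -f daemon-reload
	      sudo systemctl -f enable docker
	      sudo systemctl -f restart docker
	    fi
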
	
	I1014 07:24:11.561493   13076 machine.go:96] duration metric: took 44.993809s to provisionDockerMachine
	I1014 07:24:11.561493   13076 client.go:171] duration metric: took 1m54.3893961s to LocalClient.Create
	I1014 07:24:11.561493   13076 start.go:167] duration metric: took 1m54.3893961s to libmachine.API.Create "ha-132600"
	I1014 07:24:11.561493   13076 start.go:293] postStartSetup for "ha-132600-m02" (driver="hyperv")
	I1014 07:24:11.561493   13076 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 07:24:11.572490   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 07:24:11.572490   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:13.652544   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:13.653401   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:13.653599   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:16.188638   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:16.188638   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:16.189304   13076 sshutil.go:53] new ssh client: &{IP:172.20.111.83 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\id_rsa Username:docker}
	I1014 07:24:16.298812   13076 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7262867s)
	I1014 07:24:16.310231   13076 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 07:24:16.316927   13076 info.go:137] Remote host: Buildroot 2023.02.9
	I1014 07:24:16.316927   13076 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I1014 07:24:16.317465   13076 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I1014 07:24:16.318548   13076 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem -> 9362.pem in /etc/ssl/certs
	I1014 07:24:16.318548   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem -> /etc/ssl/certs/9362.pem
	I1014 07:24:16.332285   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 07:24:16.351469   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem --> /etc/ssl/certs/9362.pem (1708 bytes)
	I1014 07:24:16.401820   13076 start.go:296] duration metric: took 4.8403207s for postStartSetup
	I1014 07:24:16.404096   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:18.500218   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:18.501234   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:18.501234   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:20.974982   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:20.975125   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:20.975125   13076 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\config.json ...
	I1014 07:24:20.978124   13076 start.go:128] duration metric: took 2m3.8086365s to createHost
	I1014 07:24:20.978209   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:23.112937   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:23.113001   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:23.113001   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:25.636675   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:25.637206   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:25.641953   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:24:25.642670   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.111.83 22 <nil> <nil>}
	I1014 07:24:25.642670   13076 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1014 07:24:25.768945   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728915865.766702127
	
	I1014 07:24:25.769174   13076 fix.go:216] guest clock: 1728915865.766702127
	I1014 07:24:25.769174   13076 fix.go:229] Guest: 2024-10-14 07:24:25.766702127 -0700 PDT Remote: 2024-10-14 07:24:20.978124 -0700 PDT m=+320.730336401 (delta=4.788578127s)
	I1014 07:24:25.769174   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:27.845838   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:27.845838   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:27.845838   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:30.364175   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:30.364263   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:30.369382   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:24:30.369967   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.111.83 22 <nil> <nil>}
	I1014 07:24:30.370049   13076 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1728915865
	I1014 07:24:30.508807   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Oct 14 14:24:25 UTC 2024
	
	I1014 07:24:30.508807   13076 fix.go:236] clock set: Mon Oct 14 14:24:25 UTC 2024
	 (err=<nil>)
	I1014 07:24:30.508807   13076 start.go:83] releasing machines lock for "ha-132600-m02", held for 2m13.3394475s
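
The guest clock is brought in line with a plain epoch round-trip: fix.go reads the guest's `date +%s.%N` over SSH, compares it with the host wall clock (here a 4.79s drift), and pushes the host's whole-second epoch back with `sudo date -s @...`. A minimal sketch of the same correction against any SSH-reachable guest ($KEY and $HOST are placeholders standing in for the profile's id_rsa and VM address, not values from this log):

    # Read guest and host clocks as fractional seconds since the epoch.
    guest=$(ssh -i "$KEY" docker@"$HOST" 'date +%s.%N')
    host=$(date +%s.%N)
    awk -v h="$host" -v g="$guest" 'BEGIN{printf "delta: %.3fs\n", h-g}'
    # Overwrite the guest clock with the host's whole-second epoch,
    # mirroring the `sudo date -s @1728915865` call above.
    ssh -i "$KEY" docker@"$HOST" "sudo date -s @${host%.*}"
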
	I1014 07:24:30.508807   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:32.570516   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:32.570516   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:32.570516   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:35.081338   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:35.081338   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:35.084330   13076 out.go:177] * Found network options:
	I1014 07:24:35.087197   13076 out.go:177]   - NO_PROXY=172.20.108.120
	W1014 07:24:35.089531   13076 proxy.go:119] fail to check proxy env: Error ip not in block
	I1014 07:24:35.091879   13076 out.go:177]   - NO_PROXY=172.20.108.120
	W1014 07:24:35.094409   13076 proxy.go:119] fail to check proxy env: Error ip not in block
	W1014 07:24:35.095622   13076 proxy.go:119] fail to check proxy env: Error ip not in block
	I1014 07:24:35.098960   13076 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1014 07:24:35.099124   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:35.109488   13076 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1014 07:24:35.109488   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:37.293745   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:37.293818   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:37.293870   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:37.344951   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:37.344951   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:37.344951   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:39.933767   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:39.934139   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:39.934445   13076 sshutil.go:53] new ssh client: &{IP:172.20.111.83 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\id_rsa Username:docker}
	I1014 07:24:39.999458   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:39.999458   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:39.999458   13076 sshutil.go:53] new ssh client: &{IP:172.20.111.83 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\id_rsa Username:docker}
	I1014 07:24:40.039802   13076 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.930308s)
	W1014 07:24:40.039802   13076 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 07:24:40.051889   13076 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 07:24:40.057601   13076 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.9585844s)
	W1014 07:24:40.057601   13076 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
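
Note the exit status 127 above: the registry probe reuses the Windows binary name, so `curl.exe` is executed inside the Linux guest, where no such command exists. The "Failing to connect to https://registry.k8s.io/" warning emitted below is therefore a consequence of the probe never running, not evidence that the registry is unreachable from the VM. To re-run the probe by hand with the Linux binary name (a sketch; assumes the profile and node still exist and that this minikube build accepts a command after `--`):

    minikube -p ha-132600 ssh -n ha-132600-m02 -- curl -sS -m 2 https://registry.k8s.io/
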
	I1014 07:24:40.090774   13076 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1014 07:24:40.090774   13076 start.go:495] detecting cgroup driver to use...
	I1014 07:24:40.091315   13076 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 07:24:40.139757   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1014 07:24:40.172033   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	W1014 07:24:40.178804   13076 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W1014 07:24:40.178804   13076 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1014 07:24:40.194040   13076 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1014 07:24:40.209169   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1014 07:24:40.241042   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1014 07:24:40.273436   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1014 07:24:40.304842   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1014 07:24:40.336705   13076 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 07:24:40.371635   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1014 07:24:40.404378   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1014 07:24:40.435805   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
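
The run of sed edits above is the whole containerd reconfiguration: pin the sandbox (pause) image, switch v1/runc.v1 runtime references to the runc v2 shim, point conf_dir at /etc/cni/net.d, re-enable unprivileged ports, and, per the "cgroupfs" decision logged at 07:24:40.194040, force SystemdCgroup off. The same edits condensed into one root shell inside the guest (assuming the stock config.toml layout shipped on the minikube ISO):

    # Condensed form of the logged edits to /etc/containerd/config.toml.
    cfg=/etc/containerd/config.toml
    sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' "$cfg"
    sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
    sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' "$cfg"
    sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' "$cfg"
    sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' "$cfg"
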
	I1014 07:24:40.485319   13076 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 07:24:40.505473   13076 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1014 07:24:40.516975   13076 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1014 07:24:40.550905   13076 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
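
The status-255 sysctl failure above is expected at this point: /proc/sys/net/bridge/ only appears once the br_netfilter module is loaded, which the very next `modprobe` does, followed by enabling IPv4 forwarding. These are the standard kubeadm networking prerequisites; a sketch that also persists them across reboots (the k8s.conf file names are conventional choices, not taken from this log):

    # Load the bridge netfilter module now and on every boot.
    sudo modprobe br_netfilter
    echo br_netfilter | sudo tee /etc/modules-load.d/k8s.conf
    # Persist the bridged-traffic iptables hook and IPv4 forwarding.
    printf 'net.bridge.bridge-nf-call-iptables = 1\nnet.ipv4.ip_forward = 1\n' \
      | sudo tee /etc/sysctl.d/k8s.conf
    sudo sysctl --system
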
	I1014 07:24:40.581632   13076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:24:40.785773   13076 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1014 07:24:40.819978   13076 start.go:495] detecting cgroup driver to use...
	I1014 07:24:40.830992   13076 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1014 07:24:40.876801   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 07:24:40.913930   13076 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 07:24:40.966444   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 07:24:41.006432   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1014 07:24:41.044531   13076 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1014 07:24:41.114617   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1014 07:24:41.139995   13076 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 07:24:41.187779   13076 ssh_runner.go:195] Run: which cri-dockerd
	I1014 07:24:41.207367   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1014 07:24:41.226988   13076 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1014 07:24:41.272924   13076 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1014 07:24:41.470204   13076 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1014 07:24:41.650925   13076 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1014 07:24:41.650925   13076 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
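
The 130-byte /etc/docker/daemon.json pushed here carries Docker's side of the cgroup-driver choice. Its exact contents are not reproduced in this log; a plausible minimal form consistent with the "cgroupfs" line above would be (an assumption, not a dump of the real file):

    # Assumed minimal daemon.json matching the logged cgroupfs decision.
    sudo tee /etc/docker/daemon.json <<'EOF'
    {
      "exec-opts": ["native.cgroupdriver=cgroupfs"]
    }
    EOF
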
	I1014 07:24:41.694417   13076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:24:41.893295   13076 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1014 07:25:43.008408   13076 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.115034s)
	I1014 07:25:43.020423   13076 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I1014 07:25:43.059257   13076 out.go:201] 
	W1014 07:25:43.062124   13076 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Oct 14 14:24:09 ha-132600-m02 systemd[1]: Starting Docker Application Container Engine...
	Oct 14 14:24:09 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:09.938489944Z" level=info msg="Starting up"
	Oct 14 14:24:09 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:09.941531731Z" level=info msg="containerd not running, starting managed containerd"
	Oct 14 14:24:09 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:09.942638427Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=671
	Oct 14 14:24:09 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:09.976195986Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.003978070Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004049170Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004119469Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004233069Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004318268Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004457268Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004664767Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004761167Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004782067Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004795467Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004887666Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.005331964Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.008453752Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.008545052Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.008673551Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.008759151Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.008859951Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.008988550Z" level=info msg="metadata content store policy set" policy=shared
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.034095951Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.034335550Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.034407650Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.034432250Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.034449450Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.034603849Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.034945948Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035129847Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035263947Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035294447Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035312846Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035328546Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035343646Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035419846Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035443346Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035495946Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035526946Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035542746Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035565345Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035583545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035598745Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035630745Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035648245Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035663945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035689645Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035709545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035731545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035750545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035764245Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035777845Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035792145Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035809345Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035833944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035849444Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035864444Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036025444Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036057644Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036074443Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036087843Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036099043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036115343Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036131843Z" level=info msg="NRI interface is disabled by configuration."
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036482842Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036737141Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036799541Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036822041Z" level=info msg="containerd successfully booted in 0.062283s"
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.009541514Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.046546477Z" level=info msg="Loading containers: start."
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.205429391Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.426461627Z" level=info msg="Loading containers: done."
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.448023414Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.448125314Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.448200613Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.448511212Z" level=info msg="Daemon has completed initialization"
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.556153647Z" level=info msg="API listen on /var/run/docker.sock"
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.556338446Z" level=info msg="API listen on [::]:2376"
	Oct 14 14:24:11 ha-132600-m02 systemd[1]: Started Docker Application Container Engine.
	Oct 14 14:24:41 ha-132600-m02 systemd[1]: Stopping Docker Application Container Engine...
	Oct 14 14:24:41 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:41.916201367Z" level=info msg="Processing signal 'terminated'"
	Oct 14 14:24:41 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:41.918697964Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Oct 14 14:24:41 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:41.919058063Z" level=info msg="Daemon shutdown complete"
	Oct 14 14:24:41 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:41.919215663Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Oct 14 14:24:41 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:41.919252063Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Oct 14 14:24:42 ha-132600-m02 systemd[1]: docker.service: Deactivated successfully.
	Oct 14 14:24:42 ha-132600-m02 systemd[1]: Stopped Docker Application Container Engine.
	Oct 14 14:24:42 ha-132600-m02 systemd[1]: Starting Docker Application Container Engine...
	Oct 14 14:24:42 ha-132600-m02 dockerd[1075]: time="2024-10-14T14:24:42.978494717Z" level=info msg="Starting up"
	Oct 14 14:25:43 ha-132600-m02 dockerd[1075]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Oct 14 14:25:43 ha-132600-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Oct 14 14:25:43 ha-132600-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Oct 14 14:25:43 ha-132600-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W1014 07:25:43.062124   13076 out.go:270] * 
	W1014 07:25:43.062827   13076 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 07:25:43.067027   13076 out.go:201] 
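
Reading the journal above: the first dockerd (pid 665) came up at 14:24:11 with its own managed containerd, was stopped cleanly at 14:24:41 by the restart issued at 07:24:41.893295, and the replacement dockerd (pid 1075) then spent its entire 60-second dial deadline waiting on the system socket /run/containerd/containerd.sock rather than starting a managed one. One plausible reading is that the system containerd, stopped moments earlier by `sudo systemctl stop -f containerd`, never came back for the new daemon to attach to, though the journal alone does not prove which side raced. First checks for reproducing the diagnosis on the m02 guest, using only standard systemd tooling:

    # Unit states and the tail of the failed start attempt.
    systemctl status docker.service containerd.service
    journalctl -xeu docker.service --no-pager | tail -n 30
    # Does the socket dockerd is dialing actually exist?
    ls -l /run/containerd/containerd.sock
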
	
	
	==> Docker <==
	Oct 14 14:22:32 ha-132600 cri-dockerd[1324]: time="2024-10-14T14:22:32Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d7137fb8ec569089599a03e11d18eda509d210d29fa54b5e6a0a8d7dd7a54e7f/resolv.conf as [nameserver 172.20.96.1]"
	Oct 14 14:22:32 ha-132600 cri-dockerd[1324]: time="2024-10-14T14:22:32Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/90ffeb0ce1aafcb639832f7144bd2dbb0b5c9deb53555b49e386b3d3e96d8bc3/resolv.conf as [nameserver 172.20.96.1]"
	Oct 14 14:22:32 ha-132600 cri-dockerd[1324]: time="2024-10-14T14:22:32Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7278ecf7cc9466e1fee2170d699055f6498d2ece8da31c4b3696ed85b3cd38cf/resolv.conf as [nameserver 172.20.96.1]"
	Oct 14 14:22:32 ha-132600 dockerd[1431]: time="2024-10-14T14:22:32.922383735Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 14 14:22:32 ha-132600 dockerd[1431]: time="2024-10-14T14:22:32.922463635Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 14 14:22:32 ha-132600 dockerd[1431]: time="2024-10-14T14:22:32.922481235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:22:32 ha-132600 dockerd[1431]: time="2024-10-14T14:22:32.922580735Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:22:32 ha-132600 dockerd[1431]: time="2024-10-14T14:22:32.996609949Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 14 14:22:32 ha-132600 dockerd[1431]: time="2024-10-14T14:22:32.996807948Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 14 14:22:32 ha-132600 dockerd[1431]: time="2024-10-14T14:22:32.996907948Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:22:32 ha-132600 dockerd[1431]: time="2024-10-14T14:22:32.998275745Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:22:33 ha-132600 dockerd[1431]: time="2024-10-14T14:22:33.069443031Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 14 14:22:33 ha-132600 dockerd[1431]: time="2024-10-14T14:22:33.070022429Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 14 14:22:33 ha-132600 dockerd[1431]: time="2024-10-14T14:22:33.070369527Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:22:33 ha-132600 dockerd[1431]: time="2024-10-14T14:22:33.076841098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:26:17 ha-132600 dockerd[1431]: time="2024-10-14T14:26:17.675114485Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 14 14:26:17 ha-132600 dockerd[1431]: time="2024-10-14T14:26:17.675249284Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 14 14:26:17 ha-132600 dockerd[1431]: time="2024-10-14T14:26:17.675264584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:26:17 ha-132600 dockerd[1431]: time="2024-10-14T14:26:17.675974482Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:26:17 ha-132600 cri-dockerd[1324]: time="2024-10-14T14:26:17Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/91751edf004eb5236aa2443a5cff30bdc82ba05d39a3583d9026a4f19faba52f/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Oct 14 14:26:19 ha-132600 cri-dockerd[1324]: time="2024-10-14T14:26:19Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Oct 14 14:26:19 ha-132600 dockerd[1431]: time="2024-10-14T14:26:19.456978196Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 14 14:26:19 ha-132600 dockerd[1431]: time="2024-10-14T14:26:19.457173796Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 14 14:26:19 ha-132600 dockerd[1431]: time="2024-10-14T14:26:19.457616296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:26:19 ha-132600 dockerd[1431]: time="2024-10-14T14:26:19.457980197Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2b8d44d586369       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   12 minutes ago      Running             busybox                   0                   91751edf004eb       busybox-7dff88458-kr92j
	5a6196684fc6e       c69fa2e9cbf5f                                                                                         15 minutes ago      Running             coredns                   0                   7278ecf7cc946       coredns-7c65d6cfc9-4qfrq
	81d6fdac8115f       6e38f40d628db                                                                                         15 minutes ago      Running             storage-provisioner       0                   90ffeb0ce1aaf       storage-provisioner
	52c3e5370a6c8       c69fa2e9cbf5f                                                                                         15 minutes ago      Running             coredns                   0                   d7137fb8ec569       coredns-7c65d6cfc9-zf6cd
	dae2c0aa67af3       kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387              16 minutes ago      Running             kindnet-cni               0                   277e64b59b8aa       kindnet-rkjqr
	4745a4b0dc379       60c005f310ff3                                                                                         16 minutes ago      Running             kube-proxy                0                   15f605d31df67       kube-proxy-zkbj8
	b08f7a3c2a5a6       ghcr.io/kube-vip/kube-vip@sha256:9e23baad11ae3e69d739430b9fdb60df22356b7da4b4f4e458fae0541619deb4     16 minutes ago      Running             kube-vip                  0                   46cfd7b278d40       kube-vip-ha-132600
	84fd76d493d80       2e96e5913fc06                                                                                         16 minutes ago      Running             etcd                      0                   d71566d00e2f9       etcd-ha-132600
	b661cb6713103       6bab7719df100                                                                                         16 minutes ago      Running             kube-apiserver            0                   cb18be6165830       kube-apiserver-ha-132600
	35c870864a800       9aa1fad941575                                                                                         16 minutes ago      Running             kube-scheduler            0                   36593363b631a       kube-scheduler-ha-132600
	4a8cce31aa3a2       175ffd71cce3d                                                                                         16 minutes ago      Running             kube-controller-manager   0                   f5fa69fe68b07       kube-controller-manager-ha-132600
	
	
	==> coredns [52c3e5370a6c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6020d6b23d8ab86e45d1d2aab12a43bd19ffd1800c3e6fd0a66779be525e59cd5618fbe141b55ee3a43e920652bc72e7efafa696e55e63b2f614e5fc80dabe10
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:41885 - 43638 "HINFO IN 5147527986633681541.7511835962423692488. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.048565283s
	[INFO] 10.244.0.4:49801 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0004036s
	[INFO] 10.244.0.4:57121 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.178873632s
	[INFO] 10.244.0.4:54985 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000301s
	[INFO] 10.244.0.4:47603 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000273799s
	[INFO] 10.244.0.4:50030 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000276399s
	[INFO] 10.244.0.4:38124 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000439399s
	[INFO] 10.244.0.4:43764 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000347699s
	
	
	==> coredns [5a6196684fc6] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6020d6b23d8ab86e45d1d2aab12a43bd19ffd1800c3e6fd0a66779be525e59cd5618fbe141b55ee3a43e920652bc72e7efafa696e55e63b2f614e5fc80dabe10
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:50464 - 22593 "HINFO IN 8090798371432773748.7872157757733337044. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.084180423s
	[INFO] 10.244.0.4:46373 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.004105792s
	[INFO] 10.244.0.4:41175 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.166814894s
	[INFO] 10.244.0.4:43040 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002867s
	[INFO] 10.244.0.4:33549 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.045685805s
	[INFO] 10.244.0.4:60793 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000884598s
	[INFO] 10.244.0.4:57558 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001339s
	[INFO] 10.244.0.4:33138 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.040147217s
	[INFO] 10.244.0.4:46303 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0002673s
	[INFO] 10.244.0.4:60761 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001771s
	
	
	==> describe nodes <==
	Name:               ha-132600
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-132600
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b
	                    minikube.k8s.io/name=ha-132600
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_14T07_22_04_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Oct 2024 14:22:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-132600
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Oct 2024 14:38:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 14 Oct 2024 14:36:53 +0000   Mon, 14 Oct 2024 14:22:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 14 Oct 2024 14:36:53 +0000   Mon, 14 Oct 2024 14:22:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 14 Oct 2024 14:36:53 +0000   Mon, 14 Oct 2024 14:22:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 14 Oct 2024 14:36:53 +0000   Mon, 14 Oct 2024 14:22:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.20.108.120
	  Hostname:    ha-132600
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 07bcc58a0c7e4f0eaae96253dcf0a4ae
	  System UUID:                e06d00fc-5ebc-1a49-bf31-4c6ce500fe9c
	  Boot ID:                    215d765d-07e3-4bb7-ba3c-58ab142d5afa
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-kr92j              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-7c65d6cfc9-4qfrq             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-7c65d6cfc9-zf6cd             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-ha-132600                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kindnet-rkjqr                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	  kube-system                 kube-apiserver-ha-132600             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-132600    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-zkbj8                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-132600             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-132600                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 16m   kube-proxy       
	  Normal  Starting                 16m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  16m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  16m   kubelet          Node ha-132600 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m   kubelet          Node ha-132600 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m   kubelet          Node ha-132600 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           16m   node-controller  Node ha-132600 event: Registered Node ha-132600 in Controller
	  Normal  NodeReady                15m   kubelet          Node ha-132600 status is now: NodeReady
	
	
	==> dmesg <==
	[  +1.245700] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +7.050916] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +45.956887] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.179434] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[Oct14 14:21] systemd-fstab-generator[997]: Ignoring "noauto" option for root device
	[  +0.095957] kauditd_printk_skb: 65 callbacks suppressed
	[  +0.527367] systemd-fstab-generator[1037]: Ignoring "noauto" option for root device
	[  +0.185829] systemd-fstab-generator[1049]: Ignoring "noauto" option for root device
	[  +0.220100] systemd-fstab-generator[1063]: Ignoring "noauto" option for root device
	[  +2.802946] systemd-fstab-generator[1277]: Ignoring "noauto" option for root device
	[  +0.209648] systemd-fstab-generator[1289]: Ignoring "noauto" option for root device
	[  +0.207153] systemd-fstab-generator[1301]: Ignoring "noauto" option for root device
	[  +0.288223] systemd-fstab-generator[1316]: Ignoring "noauto" option for root device
	[ +11.411364] systemd-fstab-generator[1416]: Ignoring "noauto" option for root device
	[  +0.107418] kauditd_printk_skb: 202 callbacks suppressed
	[  +3.752049] systemd-fstab-generator[1674]: Ignoring "noauto" option for root device
	[  +6.264384] systemd-fstab-generator[1823]: Ignoring "noauto" option for root device
	[  +0.102334] kauditd_printk_skb: 70 callbacks suppressed
	[  +5.637673] kauditd_printk_skb: 67 callbacks suppressed
	[Oct14 14:22] systemd-fstab-generator[2317]: Ignoring "noauto" option for root device
	[  +5.137329] kauditd_printk_skb: 17 callbacks suppressed
	[  +8.104433] kauditd_printk_skb: 29 callbacks suppressed
	[Oct14 14:26] kauditd_printk_skb: 28 callbacks suppressed
	
	
	==> etcd [84fd76d493d8] <==
	{"level":"info","ts":"2024-10-14T14:21:55.755114Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-14T14:21:55.755455Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"1c0f11edc616a87e","local-member-attributes":"{Name:ha-132600 ClientURLs:[https://172.20.108.120:2379]}","request-path":"/0/members/1c0f11edc616a87e/attributes","cluster-id":"60f23df2979e3d4a","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-14T14:21:55.755734Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-14T14:21:55.762346Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-14T14:21:55.764750Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-14T14:21:55.773938Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-14T14:21:55.771033Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-14T14:21:55.781509Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-14T14:21:55.790642Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.20.108.120:2379"}
	{"level":"info","ts":"2024-10-14T14:21:55.783509Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-14T14:22:07.879957Z","caller":"traceutil/trace.go:171","msg":"trace[597013235] transaction","detail":"{read_only:false; response_revision:340; number_of_response:1; }","duration":"128.695256ms","start":"2024-10-14T14:22:07.751244Z","end":"2024-10-14T14:22:07.879940Z","steps":["trace[597013235] 'process raft request'  (duration: 56.790725ms)","trace[597013235] 'compare'  (duration: 71.246831ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-14T14:22:07.880058Z","caller":"traceutil/trace.go:171","msg":"trace[2064377417] transaction","detail":"{read_only:false; response_revision:343; number_of_response:1; }","duration":"126.232155ms","start":"2024-10-14T14:22:07.753813Z","end":"2024-10-14T14:22:07.880045Z","steps":["trace[2064377417] 'process raft request'  (duration: 125.814455ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-14T14:22:07.880130Z","caller":"traceutil/trace.go:171","msg":"trace[152598700] transaction","detail":"{read_only:false; response_revision:341; number_of_response:1; }","duration":"128.322756ms","start":"2024-10-14T14:22:07.751800Z","end":"2024-10-14T14:22:07.880123Z","steps":["trace[152598700] 'process raft request'  (duration: 127.754056ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-14T14:22:07.880149Z","caller":"traceutil/trace.go:171","msg":"trace[1241907766] transaction","detail":"{read_only:false; response_revision:342; number_of_response:1; }","duration":"128.197556ms","start":"2024-10-14T14:22:07.751946Z","end":"2024-10-14T14:22:07.880144Z","steps":["trace[1241907766] 'process raft request'  (duration: 127.649056ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-14T14:22:13.374435Z","caller":"traceutil/trace.go:171","msg":"trace[979670185] transaction","detail":"{read_only:false; response_revision:369; number_of_response:1; }","duration":"443.350864ms","start":"2024-10-14T14:22:12.931064Z","end":"2024-10-14T14:22:13.374415Z","steps":["trace[979670185] 'process raft request'  (duration: 443.018064ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-14T14:22:13.376146Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-14T14:22:12.931047Z","time spent":"444.536163ms","remote":"127.0.0.1:55470","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5580,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/etcd-ha-132600\" mod_revision:367 > success:<request_put:<key:\"/registry/pods/kube-system/etcd-ha-132600\" value_size:5531 >> failure:<request_range:<key:\"/registry/pods/kube-system/etcd-ha-132600\" > >"}
	{"level":"info","ts":"2024-10-14T14:22:28.783322Z","caller":"traceutil/trace.go:171","msg":"trace[1840750524] transaction","detail":"{read_only:false; response_revision:403; number_of_response:1; }","duration":"157.257125ms","start":"2024-10-14T14:22:28.626047Z","end":"2024-10-14T14:22:28.783304Z","steps":["trace[1840750524] 'process raft request'  (duration: 157.088326ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-14T14:22:29.878049Z","caller":"traceutil/trace.go:171","msg":"trace[630262405] transaction","detail":"{read_only:false; response_revision:404; number_of_response:1; }","duration":"108.586835ms","start":"2024-10-14T14:22:29.769442Z","end":"2024-10-14T14:22:29.878029Z","steps":["trace[630262405] 'process raft request'  (duration: 107.997536ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-14T14:23:24.077614Z","caller":"traceutil/trace.go:171","msg":"trace[844030949] transaction","detail":"{read_only:false; response_revision:541; number_of_response:1; }","duration":"108.091259ms","start":"2024-10-14T14:23:23.969504Z","end":"2024-10-14T14:23:24.077595Z","steps":["trace[844030949] 'process raft request'  (duration: 107.87726ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-14T14:31:56.536139Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":948}
	{"level":"info","ts":"2024-10-14T14:31:56.621225Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":948,"took":"84.804509ms","hash":4240470983,"current-db-size-bytes":2416640,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2416640,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2024-10-14T14:31:56.621500Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4240470983,"revision":948,"compact-revision":-1}
	{"level":"info","ts":"2024-10-14T14:36:56.554994Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1485}
	{"level":"info","ts":"2024-10-14T14:36:56.566306Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1485,"took":"10.58958ms","hash":2881326002,"current-db-size-bytes":2416640,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":1830912,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2024-10-14T14:36:56.566433Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2881326002,"revision":1485,"compact-revision":948}
	
	
	==> kernel <==
	 14:38:26 up 18 min,  0 users,  load average: 0.20, 0.33, 0.30
	Linux ha-132600 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [dae2c0aa67af] <==
	I1014 14:36:17.563197       1 main.go:300] handling current node
	I1014 14:36:27.569167       1 main.go:296] Handling node with IPs: map[172.20.108.120:{}]
	I1014 14:36:27.569280       1 main.go:300] handling current node
	I1014 14:36:37.563508       1 main.go:296] Handling node with IPs: map[172.20.108.120:{}]
	I1014 14:36:37.563559       1 main.go:300] handling current node
	I1014 14:36:47.568590       1 main.go:296] Handling node with IPs: map[172.20.108.120:{}]
	I1014 14:36:47.568749       1 main.go:300] handling current node
	I1014 14:36:57.564844       1 main.go:296] Handling node with IPs: map[172.20.108.120:{}]
	I1014 14:36:57.565008       1 main.go:300] handling current node
	I1014 14:37:07.573018       1 main.go:296] Handling node with IPs: map[172.20.108.120:{}]
	I1014 14:37:07.573144       1 main.go:300] handling current node
	I1014 14:37:17.564003       1 main.go:296] Handling node with IPs: map[172.20.108.120:{}]
	I1014 14:37:17.564067       1 main.go:300] handling current node
	I1014 14:37:27.563269       1 main.go:296] Handling node with IPs: map[172.20.108.120:{}]
	I1014 14:37:27.563423       1 main.go:300] handling current node
	I1014 14:37:37.564967       1 main.go:296] Handling node with IPs: map[172.20.108.120:{}]
	I1014 14:37:37.565195       1 main.go:300] handling current node
	I1014 14:37:47.567116       1 main.go:296] Handling node with IPs: map[172.20.108.120:{}]
	I1014 14:37:47.567209       1 main.go:300] handling current node
	I1014 14:37:57.565444       1 main.go:296] Handling node with IPs: map[172.20.108.120:{}]
	I1014 14:37:57.565503       1 main.go:300] handling current node
	I1014 14:38:07.568761       1 main.go:296] Handling node with IPs: map[172.20.108.120:{}]
	I1014 14:38:07.568963       1 main.go:300] handling current node
	I1014 14:38:17.563315       1 main.go:296] Handling node with IPs: map[172.20.108.120:{}]
	I1014 14:38:17.563401       1 main.go:300] handling current node
	
	
	==> kube-apiserver [b661cb671310] <==
	I1014 14:21:58.913061       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1014 14:21:58.919034       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I1014 14:21:58.925388       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1014 14:21:58.925426       1 policy_source.go:224] refreshing policies
	I1014 14:21:58.928296       1 controller.go:615] quota admission added evaluator for: namespaces
	E1014 14:21:58.950810       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1014 14:21:59.174958       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1014 14:21:59.819127       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1014 14:21:59.829797       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1014 14:21:59.829835       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1014 14:22:00.942976       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1014 14:22:01.015766       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1014 14:22:01.150423       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1014 14:22:01.164901       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.20.108.120]
	I1014 14:22:01.165818       1 controller.go:615] quota admission added evaluator for: endpoints
	I1014 14:22:01.175247       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1014 14:22:01.835583       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1014 14:22:03.554010       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1014 14:22:03.589407       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1014 14:22:03.617368       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1014 14:22:07.346752       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1014 14:22:07.516653       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E1014 14:38:03.250866       1 conn.go:339] Error on socket receive: read tcp 172.20.111.254:8443->172.20.96.1:57382: use of closed network connection
	E1014 14:38:04.469699       1 conn.go:339] Error on socket receive: read tcp 172.20.111.254:8443->172.20.96.1:57390: use of closed network connection
	E1014 14:38:05.596658       1 conn.go:339] Error on socket receive: read tcp 172.20.111.254:8443->172.20.96.1:57398: use of closed network connection
	
	
	==> kube-controller-manager [4a8cce31aa3a] <==
	I1014 14:22:07.963572       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="72.664531ms"
	I1014 14:22:07.964409       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="234.3µs"
	I1014 14:22:31.675332       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600"
	I1014 14:22:31.702335       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600"
	I1014 14:22:31.722232       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="290.099µs"
	I1014 14:22:31.739820       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="1.478096ms"
	I1014 14:22:31.765093       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="242.4µs"
	I1014 14:22:31.815158       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="67µs"
	I1014 14:22:31.836955       1 node_lifecycle_controller.go:1055] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I1014 14:22:33.199416       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="65.9µs"
	I1014 14:22:34.284134       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="108.4µs"
	I1014 14:22:34.352001       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="27.147579ms"
	I1014 14:22:34.400543       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="48.068986ms"
	I1014 14:22:34.400805       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="105.799µs"
	I1014 14:22:34.401110       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="152.899µs"
	I1014 14:22:34.558208       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600"
	I1014 14:26:17.151439       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="87.464595ms"
	I1014 14:26:17.170321       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="18.788035ms"
	I1014 14:26:17.170417       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="37.299µs"
	I1014 14:26:17.175584       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="43.499µs"
	I1014 14:26:20.622046       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="12.44731ms"
	I1014 14:26:20.622317       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="202µs"
	I1014 14:26:40.517161       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600"
	I1014 14:31:47.122690       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600"
	I1014 14:36:53.521795       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600"
	
	
	==> kube-proxy [4745a4b0dc37] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1014 14:22:09.024513       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1014 14:22:09.047174       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["172.20.108.120"]
	E1014 14:22:09.047308       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1014 14:22:09.124346       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1014 14:22:09.124494       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1014 14:22:09.124529       1 server_linux.go:169] "Using iptables Proxier"
	I1014 14:22:09.128809       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1014 14:22:09.129721       1 server.go:483] "Version info" version="v1.31.1"
	I1014 14:22:09.129805       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 14:22:09.131652       1 config.go:199] "Starting service config controller"
	I1014 14:22:09.131988       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1014 14:22:09.132269       1 config.go:105] "Starting endpoint slice config controller"
	I1014 14:22:09.132619       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1014 14:22:09.133592       1 config.go:328] "Starting node config controller"
	I1014 14:22:09.135184       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1014 14:22:09.232621       1 shared_informer.go:320] Caches are synced for service config
	I1014 14:22:09.233933       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1014 14:22:09.236054       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [35c870864a80] <==
	W1014 14:21:59.944424       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1014 14:21:59.944452       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 14:21:59.979078       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1014 14:21:59.979365       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 14:22:00.111250       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1014 14:22:00.111664       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 14:22:00.192015       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1014 14:22:00.192346       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1014 14:22:00.196984       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1014 14:22:00.197112       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 14:22:00.197208       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1014 14:22:00.197241       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1014 14:22:00.212172       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1014 14:22:00.212214       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1014 14:22:00.260144       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1014 14:22:00.261002       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1014 14:22:00.276235       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1014 14:22:00.276419       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1014 14:22:00.295031       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1014 14:22:00.295892       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 14:22:00.311664       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1014 14:22:00.313212       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1014 14:22:00.354351       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1014 14:22:00.355204       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 14:22:02.117335       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 14 14:34:03 ha-132600 kubelet[2324]: E1014 14:34:03.674135    2324 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 14 14:34:03 ha-132600 kubelet[2324]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 14 14:34:03 ha-132600 kubelet[2324]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 14 14:34:03 ha-132600 kubelet[2324]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 14 14:34:03 ha-132600 kubelet[2324]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 14 14:35:03 ha-132600 kubelet[2324]: E1014 14:35:03.675096    2324 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 14 14:35:03 ha-132600 kubelet[2324]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 14 14:35:03 ha-132600 kubelet[2324]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 14 14:35:03 ha-132600 kubelet[2324]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 14 14:35:03 ha-132600 kubelet[2324]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 14 14:36:03 ha-132600 kubelet[2324]: E1014 14:36:03.678469    2324 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 14 14:36:03 ha-132600 kubelet[2324]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 14 14:36:03 ha-132600 kubelet[2324]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 14 14:36:03 ha-132600 kubelet[2324]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 14 14:36:03 ha-132600 kubelet[2324]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 14 14:37:03 ha-132600 kubelet[2324]: E1014 14:37:03.677822    2324 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 14 14:37:03 ha-132600 kubelet[2324]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 14 14:37:03 ha-132600 kubelet[2324]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 14 14:37:03 ha-132600 kubelet[2324]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 14 14:37:03 ha-132600 kubelet[2324]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 14 14:38:03 ha-132600 kubelet[2324]: E1014 14:38:03.684487    2324 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 14 14:38:03 ha-132600 kubelet[2324]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 14 14:38:03 ha-132600 kubelet[2324]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 14 14:38:03 ha-132600 kubelet[2324]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 14 14:38:03 ha-132600 kubelet[2324]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [81d6fdac8115] <==
	I1014 14:22:33.432711       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1014 14:22:33.504323       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1014 14:22:33.505471       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1014 14:22:33.522254       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1014 14:22:33.522619       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ha-132600_6ba40601-1895-44e0-aab6-02c0536a0b9c!
	I1014 14:22:33.527769       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5c7ed5ed-4913-4d2f-8634-767d8aa0727d", APIVersion:"v1", ResourceVersion:"436", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ha-132600_6ba40601-1895-44e0-aab6-02c0536a0b9c became leader
	I1014 14:22:33.636551       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ha-132600_6ba40601-1895-44e0-aab6-02c0536a0b9c!
	

                                                
                                                
-- /stdout --
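Note: the kubelet section in the dump above shows the same "Could not set up iptables canary" error repeating every minute because the guest kernel exposes no ip6tables nat table (ip6table_nat is not loaded). This is cosmetic for an IPv4-only cluster. A minimal check-and-load sketch from the host, assuming the Buildroot guest ships the module (which this log does not confirm), would be:

	out/minikube-windows-amd64.exe -p ha-132600 ssh "sudo modprobe ip6table_nat && sudo ip6tables -t nat -L"

If modprobe fails, the module is absent from the guest kernel and the recurring canary errors can be ignored.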
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-132600 -n ha-132600
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-132600 -n ha-132600: (11.6783838s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-132600 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-7dff88458-8thz6 busybox-7dff88458-rng7p
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/DeployApp]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-132600 describe pod busybox-7dff88458-8thz6 busybox-7dff88458-rng7p
helpers_test.go:282: (dbg) kubectl --context ha-132600 describe pod busybox-7dff88458-8thz6 busybox-7dff88458-rng7p:

                                                
                                                
-- stdout --
	Name:             busybox-7dff88458-8thz6
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7dff88458
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7dff88458
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7r884 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-7r884:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                 From               Message
	  ----     ------            ----                ----               -------
	  Warning  FailedScheduling  2m6s (x3 over 12m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	
	
	Name:             busybox-7dff88458-rng7p
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7dff88458
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7dff88458
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9hpj2 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-9hpj2:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                 From               Message
	  ----     ------            ----                ----               -------
	  Warning  FailedScheduling  2m6s (x3 over 12m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DeployApp (742.53s)
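Note: the FailedScheduling events above ("1 node(s) didn't match pod anti-affinity rules") indicate the busybox replicas are spread with pod anti-affinity, but only one node was Ready, so the remaining replicas stay Pending. A hedged way to confirm this from the same context (the deployment name busybox is inferred from ReplicaSet/busybox-7dff88458, not shown directly in the log):

	kubectl --context ha-132600 get nodes
	kubectl --context ha-132600 get deployment busybox -o jsonpath="{.spec.template.spec.affinity.podAntiAffinity}"

If the second command prints a requiredDuringSchedulingIgnoredDuringExecution term on the app=busybox label, the Pending pods are expected until additional nodes join the cluster.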

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (44.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-132600 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-132600 -- exec busybox-7dff88458-8thz6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:207: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-132600 -- exec busybox-7dff88458-8thz6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3": exit status 1 (335.8861ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): pod busybox-7dff88458-8thz6 does not have a host assigned

                                                
                                                
** /stderr **
ha_test.go:209: Pod busybox-7dff88458-8thz6 could not resolve 'host.minikube.internal': exit status 1
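Note: the BadRequest above ("does not have a host assigned") is a downstream symptom of the scheduling failure, not a DNS problem: exec cannot run in a pod that was never placed on a node. A quick diagnostic sketch (not part of the test) to list the unscheduled pods:

	kubectl --context ha-132600 get pods -o wide --field-selector=status.phase=Pending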
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-132600 -- exec busybox-7dff88458-kr92j -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-132600 -- exec busybox-7dff88458-kr92j -- sh -c "ping -c 1 172.20.96.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-132600 -- exec busybox-7dff88458-kr92j -- sh -c "ping -c 1 172.20.96.1": exit status 1 (10.4349024s)

                                                
                                                
-- stdout --
	PING 172.20.96.1 (172.20.96.1): 56 data bytes
	
	--- 172.20.96.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
ha_test.go:219: Failed to ping host (172.20.96.1) from pod (busybox-7dff88458-kr92j): exit status 1
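Note: the ping from busybox-7dff88458-kr92j was sent (the pod is scheduled and exec worked) but 172.20.96.1, which appears to be the Hyper-V host's Default Switch address, returned no echo reply. One common cause is the Windows host firewall dropping ICMPv4 echo requests on the vEthernet (Default Switch) interface; a hypothetical rule to allow them, mirroring the PowerShell invocation style used elsewhere in this log, would be:

	C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive New-NetFirewallRule -DisplayName 'minikube ICMPv4-In' -Protocol ICMPv4 -IcmpType 8 -Direction Inbound -Action Allow

Whether the firewall is actually the blocker here is an assumption; the log only shows 100% packet loss.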
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-132600 -- exec busybox-7dff88458-rng7p -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:207: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-132600 -- exec busybox-7dff88458-rng7p -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3": exit status 1 (349.4093ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): pod busybox-7dff88458-rng7p does not have a host assigned

                                                
                                                
** /stderr **
ha_test.go:209: Pod busybox-7dff88458-rng7p could not resolve 'host.minikube.internal': exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-132600 -n ha-132600
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-132600 -n ha-132600: (11.6464437s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-132600 logs -n 25
E1014 07:39:10.845328     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\addons-043000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-132600 logs -n 25: (8.2119717s)
helpers_test.go:252: TestMultiControlPlane/serial/PingHostFromPods logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| kubectl | -p ha-132600 -- get pods -o          | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:36 PDT | 14 Oct 24 07:36 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- get pods -o          | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:36 PDT | 14 Oct 24 07:36 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- get pods -o          | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:36 PDT | 14 Oct 24 07:36 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- get pods -o          | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:36 PDT | 14 Oct 24 07:36 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- get pods -o          | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:36 PDT | 14 Oct 24 07:36 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- get pods -o          | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:36 PDT | 14 Oct 24 07:36 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- get pods -o          | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:36 PDT | 14 Oct 24 07:36 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- get pods -o          | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:37 PDT | 14 Oct 24 07:37 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- get pods -o          | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:37 PDT | 14 Oct 24 07:37 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- get pods -o          | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT | 14 Oct 24 07:38 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- get pods -o          | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT | 14 Oct 24 07:38 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT |                     |
	|         | busybox-7dff88458-8thz6 --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT | 14 Oct 24 07:38 PDT |
	|         | busybox-7dff88458-kr92j --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT |                     |
	|         | busybox-7dff88458-rng7p --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT |                     |
	|         | busybox-7dff88458-8thz6 --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT | 14 Oct 24 07:38 PDT |
	|         | busybox-7dff88458-kr92j --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT |                     |
	|         | busybox-7dff88458-rng7p --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT |                     |
	|         | busybox-7dff88458-8thz6 -- nslookup  |           |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT | 14 Oct 24 07:38 PDT |
	|         | busybox-7dff88458-kr92j -- nslookup  |           |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT |                     |
	|         | busybox-7dff88458-rng7p -- nslookup  |           |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- get pods -o          | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT | 14 Oct 24 07:38 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT |                     |
	|         | busybox-7dff88458-8thz6              |           |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT | 14 Oct 24 07:38 PDT |
	|         | busybox-7dff88458-kr92j              |           |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT |                     |
	|         | busybox-7dff88458-kr92j -- sh        |           |                   |         |                     |                     |
	|         | -c ping -c 1 172.20.96.1             |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT |                     |
	|         | busybox-7dff88458-rng7p              |           |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |         |                     |                     |
	|---------|--------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/14 07:19:00
	Running on machine: minikube1
	Binary: Built with gc go1.23.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 07:19:00.342865   13076 out.go:345] Setting OutFile to fd 1228 ...
	I1014 07:19:00.344546   13076 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:19:00.344546   13076 out.go:358] Setting ErrFile to fd 824...
	I1014 07:19:00.344546   13076 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:19:00.368247   13076 out.go:352] Setting JSON to false
	I1014 07:19:00.372277   13076 start.go:129] hostinfo: {"hostname":"minikube1","uptime":101054,"bootTime":1728814485,"procs":202,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5011 Build 19045.5011","kernelVersion":"10.0.19045.5011 Build 19045.5011","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W1014 07:19:00.372803   13076 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1014 07:19:00.379728   13076 out.go:177] * [ha-132600] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5011 Build 19045.5011
	I1014 07:19:00.386820   13076 notify.go:220] Checking for updates...
	I1014 07:19:00.389571   13076 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1014 07:19:00.392679   13076 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 07:19:00.395167   13076 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I1014 07:19:00.398285   13076 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 07:19:00.400520   13076 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 07:19:00.404118   13076 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 07:19:05.654471   13076 out.go:177] * Using the hyperv driver based on user configuration
	I1014 07:19:05.658285   13076 start.go:297] selected driver: hyperv
	I1014 07:19:05.658285   13076 start.go:901] validating driver "hyperv" against <nil>
	I1014 07:19:05.658285   13076 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 07:19:05.705211   13076 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1014 07:19:05.706541   13076 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 07:19:05.706541   13076 cni.go:84] Creating CNI manager for ""
	I1014 07:19:05.706541   13076 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1014 07:19:05.706541   13076 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1014 07:19:05.707066   13076 start.go:340] cluster config:
	{Name:ha-132600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-132600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 07:19:05.707207   13076 iso.go:125] acquiring lock: {Name:mkb71635bcbccba29dce9048ad4cc71430a7e577 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 07:19:05.711264   13076 out.go:177] * Starting "ha-132600" primary control-plane node in "ha-132600" cluster
	I1014 07:19:05.715097   13076 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1014 07:19:05.715262   13076 preload.go:146] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I1014 07:19:05.715344   13076 cache.go:56] Caching tarball of preloaded images
	I1014 07:19:05.715452   13076 preload.go:172] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1014 07:19:05.715452   13076 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1014 07:19:05.716553   13076 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\config.json ...
	I1014 07:19:05.716898   13076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\config.json: {Name:mkbd11bf3f90adebf1f4630d2e8deaac328748d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:19:05.717886   13076 start.go:360] acquireMachinesLock for ha-132600: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 07:19:05.717886   13076 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-132600"
	I1014 07:19:05.718562   13076 start.go:93] Provisioning new machine with config: &{Name:ha-132600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-132600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1014 07:19:05.718562   13076 start.go:125] createHost starting for "" (driver="hyperv")
	I1014 07:19:05.721891   13076 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1014 07:19:05.722555   13076 start.go:159] libmachine.API.Create for "ha-132600" (driver="hyperv")
	I1014 07:19:05.722555   13076 client.go:168] LocalClient.Create starting
	I1014 07:19:05.722555   13076 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I1014 07:19:05.723316   13076 main.go:141] libmachine: Decoding PEM data...
	I1014 07:19:05.723316   13076 main.go:141] libmachine: Parsing certificate...
	I1014 07:19:05.723316   13076 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I1014 07:19:05.723877   13076 main.go:141] libmachine: Decoding PEM data...
	I1014 07:19:05.723955   13076 main.go:141] libmachine: Parsing certificate...
	I1014 07:19:05.723955   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I1014 07:19:07.752927   13076 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I1014 07:19:07.753576   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:07.753793   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I1014 07:19:09.445838   13076 main.go:141] libmachine: [stdout =====>] : False
	
	I1014 07:19:09.445838   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:09.446315   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1014 07:19:10.901034   13076 main.go:141] libmachine: [stdout =====>] : True
	
	I1014 07:19:10.901034   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:10.901034   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1014 07:19:14.320078   13076 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1014 07:19:14.320309   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:14.323253   13076 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1014 07:19:14.829443   13076 main.go:141] libmachine: Creating SSH key...
	I1014 07:19:14.988345   13076 main.go:141] libmachine: Creating VM...
	I1014 07:19:14.988345   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1014 07:19:17.790932   13076 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1014 07:19:17.791005   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:17.791145   13076 main.go:141] libmachine: Using switch "Default Switch"
	I1014 07:19:17.791145   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1014 07:19:19.529861   13076 main.go:141] libmachine: [stdout =====>] : True
	
	I1014 07:19:19.529861   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:19.529861   13076 main.go:141] libmachine: Creating VHD
	I1014 07:19:19.530436   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\fixed.vhd' -SizeBytes 10MB -Fixed
	I1014 07:19:23.105904   13076 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 267BC11D-A195-4636-8AE9-EF1D7966C95C
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I1014 07:19:23.105987   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:23.106037   13076 main.go:141] libmachine: Writing magic tar header
	I1014 07:19:23.106037   13076 main.go:141] libmachine: Writing SSH key tar header
	I1014 07:19:23.118040   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\disk.vhd' -VHDType Dynamic -DeleteSource
	I1014 07:19:26.201609   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:26.201950   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:26.202023   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\disk.vhd' -SizeBytes 20000MB
	I1014 07:19:28.768368   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:28.768789   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:28.768969   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-132600 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I1014 07:19:32.226690   13076 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-132600 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I1014 07:19:32.226915   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:32.226915   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-132600 -DynamicMemoryEnabled $false
	I1014 07:19:34.390731   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:34.390837   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:34.391049   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-132600 -Count 2
	I1014 07:19:36.449382   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:36.449628   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:36.449628   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-132600 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\boot2docker.iso'
	I1014 07:19:38.932909   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:38.932909   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:38.932909   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-132600 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\disk.vhd'
	I1014 07:19:41.509229   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:41.509229   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:41.509229   13076 main.go:141] libmachine: Starting VM...
	I1014 07:19:41.509229   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-132600
	I1014 07:19:44.718680   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:44.718680   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:44.719475   13076 main.go:141] libmachine: Waiting for host to start...
	I1014 07:19:44.719770   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:19:46.954823   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:19:46.954823   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:46.955829   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:19:49.408227   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:49.408227   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:50.409369   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:19:52.561607   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:19:52.561912   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:52.561912   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:19:55.081136   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:55.081187   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:56.082401   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:19:58.271497   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:19:58.271497   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:58.272384   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:00.740863   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:20:00.741070   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:01.741536   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:03.902654   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:03.902721   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:03.902721   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:06.378064   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:20:06.378064   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:07.379104   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:09.537329   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:09.537329   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:09.537541   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:12.074320   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:12.074320   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:12.074834   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:14.171613   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:14.171613   13076 main.go:141] libmachine: [stderr =====>] : 
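The block above is the "Waiting for host to start..." loop: the VM state and the first adapter's IP address are polled with a one-second pause between rounds until Hyper-V reports an address (172.20.108.120, roughly 27 seconds in). A sketch of that loop; getVMIP stands in for the powershell query "(( Hyper-V\Get-VM ... ).networkadapters[0]).ipaddresses[0]":

package main

import (
	"fmt"
	"strings"
	"time"
)

// waitForIP polls until Hyper-V reports an IPv4 address on the VM's
// first adapter, pausing one second between attempts as the log does.
func waitForIP(name string, getVMIP func(string) (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		ip, err := getVMIP(name)
		if err == nil && strings.TrimSpace(ip) != "" {
			return strings.TrimSpace(ip), nil
		}
		time.Sleep(time.Second)
	}
	return "", fmt.Errorf("timed out waiting for %q to report an IP", name)
}

func main() {
	// Simulated: the first probes return nothing, like 07:19:49 to 07:20:06 above.
	probes := []string{"", "", "172.20.108.120"}
	i := 0
	fake := func(string) (string, error) { ip := probes[i%len(probes)]; i++; return ip, nil }
	fmt.Println(waitForIP("ha-132600", fake, 30*time.Second))
}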
	I1014 07:20:14.171613   13076 machine.go:93] provisionDockerMachine start ...
	I1014 07:20:14.172377   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:16.225265   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:16.225265   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:16.225634   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:18.692061   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:18.692061   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:18.698261   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:20:18.713495   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.108.120 22 <nil> <nil>}
	I1014 07:20:18.713568   13076 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 07:20:18.839311   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
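With the IP known, provisioning switches to SSH: each "About to run SSH command:" entry is one command executed over a key-authenticated session as user docker. A self-contained sketch using golang.org/x/crypto/ssh, with the address, user, and key path taken from this log:

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

// runSSH dials the guest with the machine's private key and runs a
// single command, returning its combined output.
func runSSH(addr, user, keyPath, command string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway local VM
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(command)
	return string(out), err
}

func main() {
	out, err := runSSH("172.20.108.120:22", "docker",
		`C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\id_rsa`,
		"hostname")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(out) // "minikube", as echoed in the log line above
}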
	
	I1014 07:20:18.839517   13076 buildroot.go:166] provisioning hostname "ha-132600"
	I1014 07:20:18.839517   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:20.911254   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:20.911424   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:20.911601   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:23.370138   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:23.370756   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:23.377008   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:20:23.377773   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.108.120 22 <nil> <nil>}
	I1014 07:20:23.377773   13076 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-132600 && echo "ha-132600" | sudo tee /etc/hostname
	I1014 07:20:23.537548   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-132600
	
	I1014 07:20:23.537548   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:25.571429   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:25.571429   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:25.572326   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:28.068677   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:28.068677   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:28.077026   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:20:28.078615   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.108.120 22 <nil> <nil>}
	I1014 07:20:28.078615   13076 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-132600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-132600/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-132600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 07:20:28.216809   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 07:20:28.216874   13076 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I1014 07:20:28.216990   13076 buildroot.go:174] setting up certificates
	I1014 07:20:28.217016   13076 provision.go:84] configureAuth start
	I1014 07:20:28.217047   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:30.246758   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:30.247093   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:30.247368   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:32.688428   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:32.688428   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:32.688428   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:34.748908   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:34.748908   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:34.749349   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:37.221628   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:37.221798   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:37.221798   13076 provision.go:143] copyHostCerts
	I1014 07:20:37.221998   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I1014 07:20:37.222444   13076 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I1014 07:20:37.222558   13076 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I1014 07:20:37.223110   13076 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1014 07:20:37.224109   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I1014 07:20:37.224599   13076 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I1014 07:20:37.224599   13076 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I1014 07:20:37.224959   13076 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I1014 07:20:37.225332   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I1014 07:20:37.225332   13076 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I1014 07:20:37.225332   13076 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I1014 07:20:37.226765   13076 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1014 07:20:37.227733   13076 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-132600 san=[127.0.0.1 172.20.108.120 ha-132600 localhost minikube]
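The server certificate is issued by the local minikube CA with the SAN list shown above (two IPs, three DNS names). A rough sketch of that issuance with crypto/x509; file names are placeholders and the CA key is assumed to be PKCS#1 RSA:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Placeholder paths; the log reads these out of .minikube\certs.
	caPEM, err := os.ReadFile("ca.pem")
	if err != nil {
		log.Fatal(err)
	}
	caKeyPEM, err := os.ReadFile("ca-key.pem")
	if err != nil {
		log.Fatal(err)
	}
	caBlock, _ := pem.Decode(caPEM)
	ca, err := x509.ParseCertificate(caBlock.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes PKCS#1 RSA
	if err != nil {
		log.Fatal(err)
	}
	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-132600"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The SAN set from the log line above:
		DNSNames:    []string{"ha-132600", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.20.108.120")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &serverKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}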
	I1014 07:20:37.731674   13076 provision.go:177] copyRemoteCerts
	I1014 07:20:37.744706   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 07:20:37.744706   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:39.801309   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:39.801309   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:39.801309   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:42.273306   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:42.273367   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:42.273367   13076 sshutil.go:53] new ssh client: &{IP:172.20.108.120 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\id_rsa Username:docker}
	I1014 07:20:42.378794   13076 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.6340227s)
	I1014 07:20:42.378847   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1014 07:20:42.378847   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1200 bytes)
	I1014 07:20:42.424772   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1014 07:20:42.425289   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 07:20:42.471397   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1014 07:20:42.471964   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1014 07:20:42.527104   13076 provision.go:87] duration metric: took 14.3100395s to configureAuth
	I1014 07:20:42.527104   13076 buildroot.go:189] setting minikube options for container-runtime
	I1014 07:20:42.527712   13076 config.go:182] Loaded profile config "ha-132600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:20:42.528252   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:44.598308   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:44.598308   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:44.599305   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:47.040424   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:47.040637   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:47.045726   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:20:47.046398   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.108.120 22 <nil> <nil>}
	I1014 07:20:47.046398   13076 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1014 07:20:47.167710   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1014 07:20:47.167801   13076 buildroot.go:70] root file system type: tmpfs
	I1014 07:20:47.168208   13076 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1014 07:20:47.168315   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:49.202111   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:49.202331   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:49.202424   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:51.648278   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:51.648278   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:51.654248   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:20:51.655052   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.108.120 22 <nil> <nil>}
	I1014 07:20:51.655052   13076 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1014 07:20:51.814048   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1014 07:20:51.814229   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:53.871643   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:53.871643   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:53.872030   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:56.286054   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:56.286054   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:56.292357   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:20:56.293125   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.108.120 22 <nil> <nil>}
	I1014 07:20:56.293125   13076 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1014 07:20:58.458028   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1014 07:20:58.458028   13076 machine.go:96] duration metric: took 44.286361s to provisionDockerMachine
	I1014 07:20:58.458028   13076 client.go:171] duration metric: took 1m52.7353371s to LocalClient.Create
	I1014 07:20:58.458028   13076 start.go:167] duration metric: took 1m52.7353371s to libmachine.API.Create "ha-132600"
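The shell one-liner at 07:20:56 is an idempotency guard: diff the existing unit against the freshly rendered docker.service.new, and only when they differ (here the old file does not exist yet, hence the "can't stat" message) move the new file into place, reload systemd, and restart Docker. The same compare-then-replace idiom on a local filesystem, sketched in Go:

package main

import (
	"bytes"
	"errors"
	"fmt"
	"io/fs"
	"os"
)

// installIfChanged mimics `diff -u old new || { mv new old; restart; }`:
// it replaces path with newPath only when the contents differ or path is
// missing, and reports whether the service needs a restart.
func installIfChanged(path, newPath string) (restart bool, err error) {
	oldBytes, err := os.ReadFile(path)
	if err != nil && !errors.Is(err, fs.ErrNotExist) {
		return false, err
	}
	newBytes, err := os.ReadFile(newPath)
	if err != nil {
		return false, err
	}
	if bytes.Equal(oldBytes, newBytes) {
		return false, os.Remove(newPath) // identical: drop the staged copy
	}
	return true, os.Rename(newPath, path)
}

func main() {
	restart, err := installIfChanged("docker.service", "docker.service.new")
	fmt.Println("restart needed:", restart, "err:", err)
}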
	I1014 07:20:58.458028   13076 start.go:293] postStartSetup for "ha-132600" (driver="hyperv")
	I1014 07:20:58.458028   13076 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 07:20:58.471262   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 07:20:58.471262   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:21:00.512590   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:21:00.513162   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:00.513321   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:21:02.924795   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:21:02.924929   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:02.925163   13076 sshutil.go:53] new ssh client: &{IP:172.20.108.120 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\id_rsa Username:docker}
	I1014 07:21:03.032582   13076 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.5613141s)
	I1014 07:21:03.044373   13076 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 07:21:03.050831   13076 info.go:137] Remote host: Buildroot 2023.02.9
	I1014 07:21:03.050831   13076 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I1014 07:21:03.051271   13076 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I1014 07:21:03.052198   13076 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem -> 9362.pem in /etc/ssl/certs
	I1014 07:21:03.052295   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem -> /etc/ssl/certs/9362.pem
	I1014 07:21:03.063873   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 07:21:03.081968   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem --> /etc/ssl/certs/9362.pem (1708 bytes)
	I1014 07:21:03.128637   13076 start.go:296] duration metric: took 4.6706031s for postStartSetup
	I1014 07:21:03.132031   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:21:05.196102   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:21:05.196622   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:05.196696   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:21:07.624940   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:21:07.625781   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:07.626103   13076 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\config.json ...
	I1014 07:21:07.629046   13076 start.go:128] duration metric: took 2m1.9103356s to createHost
	I1014 07:21:07.629046   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:21:09.687162   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:21:09.687420   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:09.687490   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:21:12.104556   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:21:12.104556   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:12.110165   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:21:12.110572   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.108.120 22 <nil> <nil>}
	I1014 07:21:12.110572   13076 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1014 07:21:12.241524   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728915672.238854784
	
	I1014 07:21:12.241655   13076 fix.go:216] guest clock: 1728915672.238854784
	I1014 07:21:12.241726   13076 fix.go:229] Guest: 2024-10-14 07:21:12.238854784 -0700 PDT Remote: 2024-10-14 07:21:07.6290467 -0700 PDT m=+127.381500801 (delta=4.609808084s)
	I1014 07:21:12.241841   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:21:14.299338   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:21:14.299599   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:14.299599   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:21:16.767477   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:21:16.767665   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:16.775565   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:21:16.775565   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.108.120 22 <nil> <nil>}
	I1014 07:21:16.775565   13076 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1728915672
	I1014 07:21:16.921038   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Oct 14 14:21:12 UTC 2024
	
	I1014 07:21:16.921096   13076 fix.go:236] clock set: Mon Oct 14 14:21:12 UTC 2024
	 (err=<nil>)
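Clock fixing works by reading the guest's `date +%s.%N`, diffing it against the host's wall clock, and resetting the guest with `sudo date -s @<seconds>` when the drift is too large (4.6 seconds here, accumulated while the VM booted). A sketch of the delta computation using the values from this log:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output and returns how far
// the guest clock is ahead of (positive) or behind (negative) the host.
func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(host), nil
}

func main() {
	// Values taken from the log: guest 1728915672.238854784 against the
	// host time recorded a few seconds earlier.
	host := time.Unix(0, int64(1728915667.629046700*float64(time.Second)))
	d, _ := clockDelta("1728915672.238854784", host)
	fmt.Println(d) // about 4.609808s, large enough to trigger `sudo date -s`
}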
	I1014 07:21:16.921169   13076 start.go:83] releasing machines lock for "ha-132600", held for 2m11.2030498s
	I1014 07:21:16.921319   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:21:18.988904   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:21:18.989898   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:18.989947   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:21:21.410167   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:21:21.410403   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:21.414482   13076 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1014 07:21:21.414683   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:21:21.423844   13076 ssh_runner.go:195] Run: cat /version.json
	I1014 07:21:21.423844   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:21:23.554001   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:21:23.554206   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:23.554362   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:21:23.565882   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:21:23.565882   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:23.565882   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:21:26.092249   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:21:26.092249   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:26.092379   13076 sshutil.go:53] new ssh client: &{IP:172.20.108.120 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\id_rsa Username:docker}
	I1014 07:21:26.119019   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:21:26.119086   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:26.119215   13076 sshutil.go:53] new ssh client: &{IP:172.20.108.120 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\id_rsa Username:docker}
	I1014 07:21:26.179573   13076 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.764971s)
	W1014 07:21:26.179670   13076 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
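This exit-127 is the source of the "Failing to connect to https://registry.k8s.io/" warning that TestErrorSpam/setup flags: the connectivity probe inherits the Windows binary name curl.exe but runs inside the Linux guest, where only curl exists, so the check fails regardless of actual network reachability. A sketch of keying the binary name on the OS where the command executes; the helper is hypothetical, not minikube's actual fix:

package main

import "fmt"

// curlBinary picks the executable name for wherever the probe actually
// runs. Keying this on the host's runtime.GOOS instead would reproduce
// the failure above: a Windows host shelling into a Linux guest still
// asks for curl.exe.
func curlBinary(remoteOS string) string {
	if remoteOS == "windows" {
		return "curl.exe"
	}
	return "curl"
}

func main() {
	fmt.Println(curlBinary("linux"))   // curl, for the buildroot guest
	fmt.Println(curlBinary("windows")) // curl.exe
}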
	I1014 07:21:26.212879   13076 ssh_runner.go:235] Completed: cat /version.json: (4.7890281s)
	I1014 07:21:26.223815   13076 ssh_runner.go:195] Run: systemctl --version
	I1014 07:21:26.243251   13076 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 07:21:26.251321   13076 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 07:21:26.262431   13076 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 07:21:26.290027   13076 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1014 07:21:26.290027   13076 start.go:495] detecting cgroup driver to use...
	I1014 07:21:26.290027   13076 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W1014 07:21:26.296743   13076 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W1014 07:21:26.296743   13076 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1014 07:21:26.340326   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1014 07:21:26.370362   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1014 07:21:26.391878   13076 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1014 07:21:26.402847   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1014 07:21:26.433241   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1014 07:21:26.462619   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1014 07:21:26.491735   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1014 07:21:26.520404   13076 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 07:21:26.550615   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1014 07:21:26.580278   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1014 07:21:26.608564   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1014 07:21:26.636849   13076 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 07:21:26.656448   13076 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1014 07:21:26.667504   13076 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1014 07:21:26.700184   13076 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
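The status-255 sysctl above is expected on a fresh boot: br_netfilter is not loaded yet, so /proc/sys/net/bridge/ does not exist. The driver falls back to modprobe and then enables IPv4 forwarding. A local sketch of that try-then-fallback order, assuming the same commands and passwordless sudo as inside the guest:

package main

import (
	"log"
	"os/exec"
)

// ensureBridgeNetfilter mirrors the sequence in the log: probe the
// sysctl first, load br_netfilter only if the probe fails, then turn on
// IPv4 forwarding.
func ensureBridgeNetfilter() error {
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			return err
		}
	}
	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		log.Fatal(err)
	}
}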
	I1014 07:21:26.726834   13076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:21:26.916163   13076 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1014 07:21:26.953200   13076 start.go:495] detecting cgroup driver to use...
	I1014 07:21:26.967627   13076 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1014 07:21:27.005080   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 07:21:27.039193   13076 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 07:21:27.083701   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 07:21:27.118670   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1014 07:21:27.153508   13076 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1014 07:21:27.209689   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1014 07:21:27.230765   13076 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 07:21:27.273213   13076 ssh_runner.go:195] Run: which cri-dockerd
	I1014 07:21:27.291783   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1014 07:21:27.308581   13076 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1014 07:21:27.351709   13076 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1014 07:21:27.539571   13076 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1014 07:21:27.725567   13076 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1014 07:21:27.725567   13076 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1014 07:21:27.768723   13076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:21:27.951142   13076 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1014 07:21:30.486027   13076 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5347969s)
	I1014 07:21:30.497722   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1014 07:21:30.531680   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1014 07:21:30.563742   13076 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1014 07:21:30.767218   13076 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1014 07:21:30.967093   13076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:21:31.191192   13076 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1014 07:21:31.229757   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1014 07:21:31.267426   13076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:21:31.462408   13076 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1014 07:21:31.569509   13076 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1014 07:21:31.581151   13076 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1014 07:21:31.591322   13076 start.go:563] Will wait 60s for crictl version
	I1014 07:21:31.604845   13076 ssh_runner.go:195] Run: which crictl
	I1014 07:21:31.622874   13076 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1014 07:21:31.680621   13076 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I1014 07:21:31.689423   13076 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1014 07:21:31.732594   13076 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1014 07:21:31.766375   13076 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	I1014 07:21:31.766375   13076 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I1014 07:21:31.770835   13076 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I1014 07:21:31.770835   13076 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I1014 07:21:31.770835   13076 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I1014 07:21:31.770835   13076 ip.go:211] Found interface: {Index:13 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:08:65:4d Flags:up|broadcast|multicast|running}
	I1014 07:21:31.773955   13076 ip.go:214] interface addr: fe80::b548:ff79:d464:75b8/64
	I1014 07:21:31.773955   13076 ip.go:214] interface addr: 172.20.96.1/20
	I1014 07:21:31.784171   13076 ssh_runner.go:195] Run: grep 172.20.96.1	host.minikube.internal$ /etc/hosts
	I1014 07:21:31.791192   13076 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.20.96.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 07:21:31.825649   13076 kubeadm.go:883] updating cluster {Name:ha-132600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-132600 Namespace:default APIServerHAVIP:172.20.111.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.108.120 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 07:21:31.825649   13076 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1014 07:21:31.834667   13076 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1014 07:21:31.867707   13076 docker.go:689] Got preloaded images: 
	I1014 07:21:31.867707   13076 docker.go:695] registry.k8s.io/kube-apiserver:v1.31.1 wasn't preloaded
	I1014 07:21:31.880238   13076 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1014 07:21:31.908893   13076 ssh_runner.go:195] Run: which lz4
	I1014 07:21:31.914374   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1014 07:21:31.924715   13076 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1014 07:21:31.930846   13076 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1014 07:21:31.931075   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (342028912 bytes)
	I1014 07:21:33.829320   13076 docker.go:653] duration metric: took 1.9148657s to copy over tarball
	I1014 07:21:33.840382   13076 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1014 07:21:42.539043   13076 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.6986506s)
	I1014 07:21:42.539174   13076 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1014 07:21:42.611065   13076 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1014 07:21:42.636639   13076 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I1014 07:21:42.681041   13076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:21:42.879066   13076 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1014 07:21:46.161713   13076 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.2824935s)
	I1014 07:21:46.173131   13076 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1014 07:21:46.198645   13076 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1014 07:21:46.198645   13076 cache_images.go:84] Images are preloaded, skipping loading
	I1014 07:21:46.198645   13076 kubeadm.go:934] updating node { 172.20.108.120 8443 v1.31.1 docker true true} ...
	I1014 07:21:46.198645   13076 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-132600 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.20.108.120
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-132600 Namespace:default APIServerHAVIP:172.20.111.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 07:21:46.209892   13076 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1014 07:21:46.283733   13076 cni.go:84] Creating CNI manager for ""
	I1014 07:21:46.283861   13076 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1014 07:21:46.283932   13076 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1014 07:21:46.284000   13076 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.20.108.120 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-132600 NodeName:ha-132600 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.20.108.120"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.20.108.120 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 07:21:46.284290   13076 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.20.108.120
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-132600"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "172.20.108.120"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.20.108.120"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
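The kubeadm config above is rendered from the cluster options (advertise address, CRI socket, node name, CIDRs) rather than written by hand. A toy text/template sketch of that kind of rendering; the fragment below is abbreviated and is not minikube's actual template:

package main

import (
	"log"
	"os"
	"text/template"
)

// Abbreviated stand-in for the real template: only fields that vary per
// cluster in the log above are parameterized.
const frag = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.Port}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.Name}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(frag))
	data := map[string]any{
		"NodeIP":    "172.20.108.120",
		"Port":      8443,
		"CRISocket": "unix:///var/run/cri-dockerd.sock",
		"Name":      "ha-132600",
	}
	if err := t.Execute(os.Stdout, data); err != nil {
		log.Fatal(err)
	}
}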
	
	I1014 07:21:46.284367   13076 kube-vip.go:115] generating kube-vip config ...
	I1014 07:21:46.295693   13076 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1014 07:21:46.324017   13076 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1014 07:21:46.324230   13076 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.20.111.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
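
This static pod is what gives the HA cluster its floating API endpoint: kube-vip holds the VIP 172.20.111.254 on eth0 via leader election and, with lb_enable set, load-balances port 8443 across control planes; note it mounts super-admin.conf as its kubeconfig, since admin.conf is not yet usable on a freshly initialized control plane. Below is a quick well-formedness check one could run against the written manifest; the gopkg.in/yaml.v3 dependency and the in-VM path are assumptions for illustration, not part of the test suite.

```go
package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// Path where the log says the manifest was written inside the VM.
	data, err := os.ReadFile("/etc/kubernetes/manifests/kube-vip.yaml")
	if err != nil {
		panic(err)
	}
	var pod struct {
		Spec struct {
			Containers []struct {
				Env []struct {
					Name  string `yaml:"name"`
					Value string `yaml:"value"`
				} `yaml:"env"`
			} `yaml:"containers"`
		} `yaml:"spec"`
	}
	if err := yaml.Unmarshal(data, &pod); err != nil {
		panic(err) // a malformed manifest would leave the VIP unmanaged
	}
	for _, c := range pod.Spec.Containers {
		for _, e := range c.Env {
			if e.Name == "address" {
				fmt.Println("VIP:", e.Value) // expect 172.20.111.254
			}
		}
	}
}
```
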
	I1014 07:21:46.334184   13076 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1014 07:21:46.349321   13076 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 07:21:46.360771   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1014 07:21:46.379224   13076 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (310 bytes)
	I1014 07:21:46.412080   13076 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 07:21:46.441691   13076 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I1014 07:21:46.479025   13076 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1014 07:21:46.519461   13076 ssh_runner.go:195] Run: grep 172.20.111.254	control-plane.minikube.internal$ /etc/hosts
	I1014 07:21:46.525021   13076 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.20.111.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
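
The one-liner above is an idempotent /etc/hosts update: filter out any stale control-plane.minikube.internal line, append the fresh IP-to-name mapping, and copy the temp file back under sudo so the file is never observed half-written. A hedged sketch of assembling that command string in Go (hostsUpdateCmd is a made-up helper name):

```go
package main

import "fmt"

// hostsUpdateCmd (hypothetical helper) builds the bash one-liner from
// the log: drop any line ending in "\t<name>", append "<ip>\t<name>",
// then install the result via a temp file and sudo cp.
func hostsUpdateCmd(ip, name string) string {
	return fmt.Sprintf(
		"{ grep -v $'\\t%s$' \"/etc/hosts\"; echo \"%s\t%s\"; } > /tmp/h.$$; sudo cp /tmp/h.$$ \"/etc/hosts\"",
		name, ip, name)
}

func main() {
	fmt.Println(hostsUpdateCmd("172.20.111.254", "control-plane.minikube.internal"))
}
```
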
	I1014 07:21:46.554943   13076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:21:46.730466   13076 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 07:21:46.761153   13076 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600 for IP: 172.20.108.120
	I1014 07:21:46.761153   13076 certs.go:194] generating shared ca certs ...
	I1014 07:21:46.761153   13076 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:21:46.761825   13076 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I1014 07:21:46.762680   13076 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I1014 07:21:46.762846   13076 certs.go:256] generating profile certs ...
	I1014 07:21:46.763605   13076 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\client.key
	I1014 07:21:46.763605   13076 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\client.crt with IP's: []
	I1014 07:21:46.955865   13076 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\client.crt ...
	I1014 07:21:46.955865   13076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\client.crt: {Name:mk4a5e51a4b9d62279c7ef4ef47716bcdf88d704 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:21:46.957857   13076 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\client.key ...
	I1014 07:21:46.957857   13076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\client.key: {Name:mkc80e99cfe4d8d17198a87fb188cfa1a72d727d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:21:46.958881   13076 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.key.59094d61
	I1014 07:21:46.958881   13076 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.crt.59094d61 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.20.108.120 172.20.111.254]
	I1014 07:21:47.375660   13076 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.crt.59094d61 ...
	I1014 07:21:47.375660   13076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.crt.59094d61: {Name:mke858ab0ac103663123708f9814cfdffe03211e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:21:47.377216   13076 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.key.59094d61 ...
	I1014 07:21:47.377216   13076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.key.59094d61: {Name:mk972fca892a193d6b4e84d42e27ed86ce9f0457 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:21:47.377811   13076 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.crt.59094d61 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.crt
	I1014 07:21:47.391806   13076 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.key.59094d61 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.key
	I1014 07:21:47.392823   13076 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.key
	I1014 07:21:47.392823   13076 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.crt with IP's: []
	I1014 07:21:47.675751   13076 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.crt ...
	I1014 07:21:47.675751   13076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.crt: {Name:mk5a9f1239213dd0a0f36b4ee41d5ac56cbd79c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:21:47.677918   13076 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.key ...
	I1014 07:21:47.677918   13076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.key: {Name:mkb931094180fef1516b0d43acb6430ecbcbb3fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:21:47.678894   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I1014 07:21:47.678894   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I1014 07:21:47.678894   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1014 07:21:47.678894   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1014 07:21:47.679943   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1014 07:21:47.679943   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1014 07:21:47.679943   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1014 07:21:47.691415   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1014 07:21:47.693468   13076 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\936.pem (1338 bytes)
	W1014 07:21:47.693468   13076 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\936_empty.pem, impossibly tiny 0 bytes
	I1014 07:21:47.694004   13076 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1014 07:21:47.694400   13076 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I1014 07:21:47.694671   13076 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1014 07:21:47.695091   13076 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I1014 07:21:47.695960   13076 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem (1708 bytes)
	I1014 07:21:47.696145   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\936.pem -> /usr/share/ca-certificates/936.pem
	I1014 07:21:47.696401   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem -> /usr/share/ca-certificates/9362.pem
	I1014 07:21:47.696580   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1014 07:21:47.697580   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 07:21:47.742705   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1014 07:21:47.790988   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 07:21:47.837975   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 07:21:47.884872   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1014 07:21:47.936878   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1014 07:21:47.985330   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 07:21:48.029181   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 07:21:48.078824   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\936.pem --> /usr/share/ca-certificates/936.pem (1338 bytes)
	I1014 07:21:48.127907   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem --> /usr/share/ca-certificates/9362.pem (1708 bytes)
	I1014 07:21:48.174542   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 07:21:48.218635   13076 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
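
Each "scp memory --> ..." line above pushes an in-memory byte slice to the VM over SSH instead of staging a local file first. minikube's ssh_runner speaks the scp protocol for this; the sketch below gets the same effect by piping stdin into sudo tee, which is simpler to show. The credentials are placeholders for illustration; the real harness authenticates with the generated id_rsa key that appears later in this log.

```go
package main

import (
	"bytes"
	"fmt"

	"golang.org/x/crypto/ssh"
)

// pushBytes writes data to dst on the remote host by piping it into
// "sudo tee" over one SSH session; same effect as the "scp memory"
// lines above, though minikube itself speaks the scp protocol.
func pushBytes(client *ssh.Client, data []byte, dst string) error {
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	sess.Stdin = bytes.NewReader(data)
	return sess.Run(fmt.Sprintf("sudo tee %q >/dev/null", dst))
}

func main() {
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.Password("tcuser")}, // placeholder; the harness uses the id_rsa key
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),              // acceptable only for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", "172.20.108.120:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	if err := pushBytes(client, []byte("example contents\n"), "/var/tmp/minikube/example.txt"); err != nil {
		panic(err)
	}
}
```
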
	I1014 07:21:48.264009   13076 ssh_runner.go:195] Run: openssl version
	I1014 07:21:48.282939   13076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 07:21:48.310256   13076 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 07:21:48.318050   13076 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 13:44 /usr/share/ca-certificates/minikubeCA.pem
	I1014 07:21:48.328768   13076 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 07:21:48.348090   13076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 07:21:48.376484   13076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/936.pem && ln -fs /usr/share/ca-certificates/936.pem /etc/ssl/certs/936.pem"
	I1014 07:21:48.408494   13076 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/936.pem
	I1014 07:21:48.416513   13076 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 14:00 /usr/share/ca-certificates/936.pem
	I1014 07:21:48.427815   13076 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/936.pem
	I1014 07:21:48.447033   13076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/936.pem /etc/ssl/certs/51391683.0"
	I1014 07:21:48.474293   13076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9362.pem && ln -fs /usr/share/ca-certificates/9362.pem /etc/ssl/certs/9362.pem"
	I1014 07:21:48.502293   13076 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9362.pem
	I1014 07:21:48.509688   13076 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 14:00 /usr/share/ca-certificates/9362.pem
	I1014 07:21:48.519811   13076 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9362.pem
	I1014 07:21:48.540403   13076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9362.pem /etc/ssl/certs/3ec20f2e.0"
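
The test/ln pairs above wire each CA into OpenSSL's hashed-directory lookup: openssl x509 -hash prints the subject-name hash (b5213941, 51391683, 3ec20f2e here), and <hash>.0 under /etc/ssl/certs is the name the TLS verifier actually opens. A sketch of the same two steps from Go; it assumes openssl on PATH and write access to the target directory.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertByHash symlinks certPath into dir under OpenSSL's hashed
// name (<subject-hash>.0), the layout the verifier scans at runtime.
// Needs openssl on PATH and (for /etc/ssl/certs) root privileges.
func linkCertByHash(certPath, dir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join(dir, hash+".0")
	_ = os.Remove(link) // "ln -fs" semantics: replace a stale link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```
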
	I1014 07:21:48.569007   13076 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 07:21:48.575758   13076 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1014 07:21:48.576118   13076 kubeadm.go:392] StartCluster: {Name:ha-132600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-132600 Namespace:default APIServerHAVIP:172.20.111.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.108.120 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 07:21:48.584978   13076 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1014 07:21:48.619430   13076 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 07:21:48.647606   13076 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 07:21:48.676291   13076 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 07:21:48.693679   13076 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 07:21:48.693679   13076 kubeadm.go:157] found existing configuration files:
	
	I1014 07:21:48.703613   13076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 07:21:48.722067   13076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 07:21:48.732618   13076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 07:21:48.759836   13076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 07:21:48.777622   13076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 07:21:48.789748   13076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 07:21:48.818512   13076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 07:21:48.837396   13076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 07:21:48.848696   13076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 07:21:48.876926   13076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 07:21:48.893530   13076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 07:21:48.904372   13076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 07:21:48.920210   13076 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1014 07:21:49.392185   13076 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 07:22:04.140256   13076 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1014 07:22:04.140412   13076 kubeadm.go:310] [preflight] Running pre-flight checks
	I1014 07:22:04.140501   13076 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 07:22:04.140848   13076 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 07:22:04.141206   13076 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 07:22:04.141311   13076 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 07:22:04.144180   13076 out.go:235]   - Generating certificates and keys ...
	I1014 07:22:04.144373   13076 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1014 07:22:04.144575   13076 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1014 07:22:04.144939   13076 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1014 07:22:04.145172   13076 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1014 07:22:04.145315   13076 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1014 07:22:04.145521   13076 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1014 07:22:04.145845   13076 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1014 07:22:04.146212   13076 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-132600 localhost] and IPs [172.20.108.120 127.0.0.1 ::1]
	I1014 07:22:04.146365   13076 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1014 07:22:04.146614   13076 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-132600 localhost] and IPs [172.20.108.120 127.0.0.1 ::1]
	I1014 07:22:04.147003   13076 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1014 07:22:04.147200   13076 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1014 07:22:04.147236   13076 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1014 07:22:04.147236   13076 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 07:22:04.147236   13076 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 07:22:04.147236   13076 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 07:22:04.147954   13076 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 07:22:04.147987   13076 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 07:22:04.147987   13076 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 07:22:04.147987   13076 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 07:22:04.148785   13076 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 07:22:04.151235   13076 out.go:235]   - Booting up control plane ...
	I1014 07:22:04.151235   13076 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 07:22:04.151819   13076 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 07:22:04.152046   13076 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 07:22:04.152333   13076 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 07:22:04.152333   13076 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 07:22:04.152333   13076 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1014 07:22:04.152333   13076 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 07:22:04.152333   13076 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 07:22:04.153426   13076 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 523.263843ms
	I1014 07:22:04.153534   13076 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1014 07:22:04.153534   13076 kubeadm.go:310] [api-check] The API server is healthy after 9.176461865s
	I1014 07:22:04.153534   13076 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1014 07:22:04.154243   13076 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1014 07:22:04.154243   13076 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1014 07:22:04.154779   13076 kubeadm.go:310] [mark-control-plane] Marking the node ha-132600 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1014 07:22:04.154779   13076 kubeadm.go:310] [bootstrap-token] Using token: rm3d8a.inqsemtt5b5l5x2e
	I1014 07:22:04.157926   13076 out.go:235]   - Configuring RBAC rules ...
	I1014 07:22:04.158760   13076 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1014 07:22:04.158928   13076 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1014 07:22:04.158928   13076 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1014 07:22:04.159656   13076 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1014 07:22:04.159851   13076 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1014 07:22:04.160089   13076 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1014 07:22:04.160270   13076 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1014 07:22:04.160529   13076 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1014 07:22:04.160529   13076 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1014 07:22:04.160529   13076 kubeadm.go:310] 
	I1014 07:22:04.160529   13076 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1014 07:22:04.160529   13076 kubeadm.go:310] 
	I1014 07:22:04.160529   13076 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1014 07:22:04.160529   13076 kubeadm.go:310] 
	I1014 07:22:04.161100   13076 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1014 07:22:04.161186   13076 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1014 07:22:04.161352   13076 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1014 07:22:04.161352   13076 kubeadm.go:310] 
	I1014 07:22:04.161534   13076 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1014 07:22:04.161534   13076 kubeadm.go:310] 
	I1014 07:22:04.161706   13076 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1014 07:22:04.161706   13076 kubeadm.go:310] 
	I1014 07:22:04.161950   13076 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1014 07:22:04.161950   13076 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1014 07:22:04.161950   13076 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1014 07:22:04.161950   13076 kubeadm.go:310] 
	I1014 07:22:04.162515   13076 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1014 07:22:04.162607   13076 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1014 07:22:04.162607   13076 kubeadm.go:310] 
	I1014 07:22:04.162607   13076 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token rm3d8a.inqsemtt5b5l5x2e \
	I1014 07:22:04.162607   13076 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f876f951efbe7f1ed47ca578ee0d33e6ce8fe69c1a008a8154ee00f56ac84e46 \
	I1014 07:22:04.163162   13076 kubeadm.go:310] 	--control-plane 
	I1014 07:22:04.163261   13076 kubeadm.go:310] 
	I1014 07:22:04.163340   13076 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1014 07:22:04.163340   13076 kubeadm.go:310] 
	I1014 07:22:04.163570   13076 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token rm3d8a.inqsemtt5b5l5x2e \
	I1014 07:22:04.163570   13076 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f876f951efbe7f1ed47ca578ee0d33e6ce8fe69c1a008a8154ee00f56ac84e46 
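
The --discovery-token-ca-cert-hash in the join commands above is not a digest of the whole CA certificate; it is the SHA-256 of the CA's DER-encoded Subject Public Key Info, which a joining node recomputes to pin the control plane it is bootstrapping against. A minimal sketch, using the in-VM ca.crt path from this log:

```go
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm pins the DER-encoded SubjectPublicKeyInfo, not the cert.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum) // should match the hash in the join command
}
```
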
	I1014 07:22:04.163570   13076 cni.go:84] Creating CNI manager for ""
	I1014 07:22:04.163570   13076 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1014 07:22:04.167124   13076 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1014 07:22:04.179134   13076 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1014 07:22:04.187701   13076 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1014 07:22:04.187701   13076 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1014 07:22:04.233053   13076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1014 07:22:04.912894   13076 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1014 07:22:04.925574   13076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-132600 minikube.k8s.io/updated_at=2024_10_14T07_22_04_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b minikube.k8s.io/name=ha-132600 minikube.k8s.io/primary=true
	I1014 07:22:04.927795   13076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 07:22:04.958241   13076 ops.go:34] apiserver oom_adj: -16
	I1014 07:22:05.168034   13076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 07:22:05.667048   13076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 07:22:06.166157   13076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 07:22:06.667561   13076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 07:22:07.167283   13076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 07:22:07.305312   13076 kubeadm.go:1113] duration metric: took 2.3924152s to wait for elevateKubeSystemPrivileges
	I1014 07:22:07.306284   13076 kubeadm.go:394] duration metric: took 18.730142s to StartCluster
	I1014 07:22:07.306284   13076 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:22:07.306284   13076 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1014 07:22:07.308077   13076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:22:07.309777   13076 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1014 07:22:07.309777   13076 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.20.108.120 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1014 07:22:07.309953   13076 start.go:241] waiting for startup goroutines ...
	I1014 07:22:07.309868   13076 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1014 07:22:07.310143   13076 addons.go:69] Setting storage-provisioner=true in profile "ha-132600"
	I1014 07:22:07.310235   13076 addons.go:234] Setting addon storage-provisioner=true in "ha-132600"
	I1014 07:22:07.310143   13076 addons.go:69] Setting default-storageclass=true in profile "ha-132600"
	I1014 07:22:07.310402   13076 config.go:182] Loaded profile config "ha-132600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:22:07.310402   13076 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-132600"
	I1014 07:22:07.310402   13076 host.go:66] Checking if "ha-132600" exists ...
	I1014 07:22:07.311457   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:22:07.311926   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:22:07.495748   13076 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.20.96.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1014 07:22:08.130501   13076 start.go:971] {"host.minikube.internal": 172.20.96.1} host record injected into CoreDNS's ConfigMap
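
The pipeline above rewrites the CoreDNS Corefile in place: sed splices a hosts block resolving host.minikube.internal to the gateway (172.20.96.1) in front of the forward plugin, with fallthrough so every other name still goes to /etc/resolv.conf, and kubectl replace pushes the edited ConfigMap back. The same splice expressed in Go, purely as an illustration of the edit, not minikube's code:

```go
package main

import (
	"fmt"
	"strings"
)

// injectHostRecord mirrors the sed edit above: splice a CoreDNS
// "hosts" block (resolving host.minikube.internal) in front of the
// forward plugin; fallthrough keeps all other names resolving.
func injectHostRecord(corefile, ip string) string {
	hosts := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", ip)
	var out strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			out.WriteString(hosts)
		}
		out.WriteString(line)
	}
	return out.String()
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n}\n"
	fmt.Print(injectHostRecord(corefile, "172.20.96.1"))
}
```
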
	I1014 07:22:09.594501   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:22:09.594501   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:09.594786   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:22:09.594878   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:09.595740   13076 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1014 07:22:09.596691   13076 kapi.go:59] client config for ha-132600: &rest.Config{Host:"https://172.20.111.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-132600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-132600\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2926ae0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1014 07:22:09.597939   13076 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 07:22:09.598267   13076 cert_rotation.go:140] Starting client certificate rotation controller
	I1014 07:22:09.598703   13076 addons.go:234] Setting addon default-storageclass=true in "ha-132600"
	I1014 07:22:09.598956   13076 host.go:66] Checking if "ha-132600" exists ...
	I1014 07:22:09.600143   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:22:09.600143   13076 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 07:22:09.600247   13076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1014 07:22:09.600352   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:22:11.901872   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:22:11.901872   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:11.902855   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:22:11.913525   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:22:11.914554   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:11.914725   13076 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1014 07:22:11.914841   13076 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1014 07:22:11.915002   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:22:14.163865   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:22:14.163865   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:14.163865   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:22:14.597752   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:22:14.597752   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:14.598273   13076 sshutil.go:53] new ssh client: &{IP:172.20.108.120 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\id_rsa Username:docker}
	I1014 07:22:14.755906   13076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 07:22:16.766331   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:22:16.766798   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:16.767079   13076 sshutil.go:53] new ssh client: &{IP:172.20.108.120 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\id_rsa Username:docker}
	I1014 07:22:16.907784   13076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1014 07:22:17.107355   13076 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1014 07:22:17.107355   13076 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1014 07:22:17.108322   13076 round_trippers.go:463] GET https://172.20.111.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1014 07:22:17.108322   13076 round_trippers.go:469] Request Headers:
	I1014 07:22:17.108322   13076 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:22:17.108322   13076 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:22:17.120122   13076 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1014 07:22:17.120927   13076 round_trippers.go:463] PUT https://172.20.111.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1014 07:22:17.121006   13076 round_trippers.go:469] Request Headers:
	I1014 07:22:17.121006   13076 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:22:17.121006   13076 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:22:17.121089   13076 round_trippers.go:473]     Content-Type: application/json
	I1014 07:22:17.124876   13076 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 07:22:17.128782   13076 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1014 07:22:17.133792   13076 addons.go:510] duration metric: took 9.8239112s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1014 07:22:17.133792   13076 start.go:246] waiting for cluster config update ...
	I1014 07:22:17.133792   13076 start.go:255] writing updated cluster config ...
	I1014 07:22:17.140793   13076 out.go:201] 
	I1014 07:22:17.150794   13076 config.go:182] Loaded profile config "ha-132600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:22:17.151784   13076 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\config.json ...
	I1014 07:22:17.157758   13076 out.go:177] * Starting "ha-132600-m02" control-plane node in "ha-132600" cluster
	I1014 07:22:17.160381   13076 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1014 07:22:17.160381   13076 cache.go:56] Caching tarball of preloaded images
	I1014 07:22:17.160381   13076 preload.go:172] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1014 07:22:17.160381   13076 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1014 07:22:17.160381   13076 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\config.json ...
	I1014 07:22:17.168672   13076 start.go:360] acquireMachinesLock for ha-132600-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 07:22:17.169194   13076 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-132600-m02"
	I1014 07:22:17.169334   13076 start.go:93] Provisioning new machine with config: &{Name:ha-132600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-132600 Namespace:default APIServerHAVIP:172.20.111.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.108.120 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1014 07:22:17.169334   13076 start.go:125] createHost starting for "m02" (driver="hyperv")
	I1014 07:22:17.171957   13076 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1014 07:22:17.171957   13076 start.go:159] libmachine.API.Create for "ha-132600" (driver="hyperv")
	I1014 07:22:17.171957   13076 client.go:168] LocalClient.Create starting
	I1014 07:22:17.171957   13076 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I1014 07:22:17.173148   13076 main.go:141] libmachine: Decoding PEM data...
	I1014 07:22:17.173148   13076 main.go:141] libmachine: Parsing certificate...
	I1014 07:22:17.173148   13076 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I1014 07:22:17.173148   13076 main.go:141] libmachine: Decoding PEM data...
	I1014 07:22:17.173802   13076 main.go:141] libmachine: Parsing certificate...
	I1014 07:22:17.173802   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I1014 07:22:19.094383   13076 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I1014 07:22:19.094383   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:19.094970   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I1014 07:22:20.808796   13076 main.go:141] libmachine: [stdout =====>] : False
	
	I1014 07:22:20.808796   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:20.808897   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1014 07:22:22.292959   13076 main.go:141] libmachine: [stdout =====>] : True
	
	I1014 07:22:22.292959   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:22.293074   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1014 07:22:25.811891   13076 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1014 07:22:25.811891   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:25.813583   13076 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1014 07:22:26.334218   13076 main.go:141] libmachine: Creating SSH key...
	I1014 07:22:26.579833   13076 main.go:141] libmachine: Creating VM...
	I1014 07:22:26.580305   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1014 07:22:29.402237   13076 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1014 07:22:29.402237   13076 main.go:141] libmachine: [stderr =====>] : 
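
The switch query above returns JSON that libmachine parses to pick a network: the Where-Object filter keeps External switches plus the well-known Default Switch (that fixed GUID), and SwitchType is numeric (0=Private, 1=Internal, 2=External), which is why the Default Switch reports 1. Parsing that payload in Go looks like:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// vmSwitch matches the fields selected by the PowerShell query above.
// SwitchType is numeric in the JSON: 0=Private, 1=Internal,
// 2=External, hence the Default Switch showing up as 1.
type vmSwitch struct {
	Id         string
	Name       string
	SwitchType int
}

func main() {
	raw := []byte(`[{"Id":"c08cb7b8-9b3c-408e-8e30-5e16a3aeb444","Name":"Default Switch","SwitchType":1}]`)
	var switches []vmSwitch
	if err := json.Unmarshal(raw, &switches); err != nil {
		panic(err)
	}
	for _, s := range switches {
		fmt.Printf("%s (%s): type %d\n", s.Name, s.Id, s.SwitchType)
	}
}
```
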
	I1014 07:22:29.402237   13076 main.go:141] libmachine: Using switch "Default Switch"
	I1014 07:22:29.402237   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1014 07:22:31.181025   13076 main.go:141] libmachine: [stdout =====>] : True
	
	I1014 07:22:31.181025   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:31.181025   13076 main.go:141] libmachine: Creating VHD
	I1014 07:22:31.181025   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I1014 07:22:34.986705   13076 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 9BEBB030-6569-4A5D-A43D-C976E242B24D
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I1014 07:22:34.986953   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:34.986953   13076 main.go:141] libmachine: Writing magic tar header
	I1014 07:22:34.986953   13076 main.go:141] libmachine: Writing SSH key tar header
	I1014 07:22:34.998864   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I1014 07:22:38.117497   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:22:38.117497   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:38.117497   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\disk.vhd' -SizeBytes 20000MB
	I1014 07:22:40.750048   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:22:40.750048   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:40.750048   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-132600-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I1014 07:22:44.293371   13076 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-132600-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I1014 07:22:44.294009   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:44.294158   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-132600-m02 -DynamicMemoryEnabled $false
	I1014 07:22:46.495846   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:22:46.496106   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:46.496211   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-132600-m02 -Count 2
	I1014 07:22:48.637702   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:22:48.637702   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:48.637702   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-132600-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\boot2docker.iso'
	I1014 07:22:51.128606   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:22:51.128606   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:51.128606   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-132600-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\disk.vhd'
	I1014 07:22:53.765202   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:22:53.765775   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:53.765775   13076 main.go:141] libmachine: Starting VM...
	I1014 07:22:53.765847   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-132600-m02
	I1014 07:22:56.961396   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:22:56.961396   13076 main.go:141] libmachine: [stderr =====>] : 
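	At this point the driver has assembled the VM entirely through PowerShell round trips: Convert-VHD to a dynamic disk, Resize-VHD to 20000MB, New-VM on the 'Default Switch', fixed memory and two vCPUs, the boot2docker ISO attached as a DVD drive, the VHD attached, and finally Start-VM. A minimal Go sketch of that shell-out pattern follows; the VM name and paths are placeholders, and this is not minikube's actual libmachine code:

	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    	"strings"
	    )

	    // runPS executes one Hyper-V cmdlet the way the log shows:
	    // powershell.exe -NoProfile -NonInteractive <command>, capturing output.
	    func runPS(command string) (string, error) {
	    	out, err := exec.Command(
	    		`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
	    		"-NoProfile", "-NonInteractive", command).CombinedOutput()
	    	return strings.TrimSpace(string(out)), err
	    }

	    func main() {
	    	// Ordered steps mirroring the log: create, pin resources, attach media, start.
	    	steps := []string{
	    		`Hyper-V\New-VM demo-m02 -Path 'C:\vms\demo-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB`,
	    		`Hyper-V\Set-VMMemory -VMName demo-m02 -DynamicMemoryEnabled $false`,
	    		`Hyper-V\Set-VMProcessor demo-m02 -Count 2`,
	    		`Hyper-V\Set-VMDvdDrive -VMName demo-m02 -Path 'C:\vms\demo-m02\boot2docker.iso'`,
	    		`Hyper-V\Add-VMHardDiskDrive -VMName demo-m02 -Path 'C:\vms\demo-m02\disk.vhd'`,
	    		`Hyper-V\Start-VM demo-m02`,
	    	}
	    	for _, step := range steps {
	    		if out, err := runPS(step); err != nil {
	    			fmt.Printf("step failed: %v\n%s\n", err, out)
	    			return
	    		}
	    	}
	    }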
	I1014 07:22:56.962040   13076 main.go:141] libmachine: Waiting for host to start...
	I1014 07:22:56.962040   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:22:59.251592   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:22:59.251592   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:59.251838   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:01.758067   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:23:01.758159   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:02.758293   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:04.921490   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:04.921565   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:04.921802   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:07.441595   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:23:07.441595   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:08.443053   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:10.659721   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:10.659721   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:10.660418   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:13.126618   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:23:13.126952   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:14.127183   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:16.307414   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:16.307414   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:16.307500   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:18.748902   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:23:18.749678   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:19.749962   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:21.893323   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:21.893323   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:21.893323   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:24.464351   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:23:24.465392   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:24.465471   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:26.567626   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:26.567626   13076 main.go:141] libmachine: [stderr =====>] : 
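	The repeated Get-VM round trips above are a poll loop: the driver re-queries the VM state and the first NIC's first IP address until Hyper-V reports a non-empty string (172.20.111.83 here, after roughly 30 seconds). A sketch of the same wait, assuming the runPS helper from the previous snippet plus the fmt and time imports:

	    // waitForIP polls (( Hyper-V\Get-VM <name> ).networkadapters[0]).ipaddresses[0]
	    // until PowerShell prints a non-empty address or the deadline passes.
	    func waitForIP(vmName string, timeout time.Duration) (string, error) {
	    	query := fmt.Sprintf(`(( Hyper-V\Get-VM %s ).networkadapters[0]).ipaddresses[0]`, vmName)
	    	deadline := time.Now().Add(timeout)
	    	for time.Now().Before(deadline) {
	    		ip, err := runPS(query)
	    		if err == nil && ip != "" {
	    			return ip, nil
	    		}
	    		time.Sleep(time.Second) // the log shows ~1s pauses between rounds
	    	}
	    	return "", fmt.Errorf("timed out waiting for an IP on %s", vmName)
	    }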
	I1014 07:23:26.567626   13076 machine.go:93] provisionDockerMachine start ...
	I1014 07:23:26.568400   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:28.681591   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:28.681591   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:28.682618   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:31.178308   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:23:31.178308   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:31.192309   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:23:31.207523   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.111.83 22 <nil> <nil>}
	I1014 07:23:31.208024   13076 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 07:23:31.337861   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1014 07:23:31.337861   13076 buildroot.go:166] provisioning hostname "ha-132600-m02"
	I1014 07:23:31.337941   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:33.402839   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:33.402839   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:33.403340   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:35.894312   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:23:35.894312   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:35.901210   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:23:35.901699   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.111.83 22 <nil> <nil>}
	I1014 07:23:35.901823   13076 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-132600-m02 && echo "ha-132600-m02" | sudo tee /etc/hostname
	I1014 07:23:36.071040   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-132600-m02
	
	I1014 07:23:36.071040   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:38.150707   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:38.151729   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:38.151729   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:40.639242   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:23:40.639242   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:40.644598   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:23:40.645419   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.111.83 22 <nil> <nil>}
	I1014 07:23:40.645490   13076 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-132600-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-132600-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-132600-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 07:23:40.801974   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: 
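	With an address in hand, the provisioner switches from PowerShell to SSH; the &{...} dump above is the native Go SSH client configuration (host 172.20.111.83, port 22). The hostname pass then runs the three commands shown: set the live hostname, write /etc/hostname, and patch /etc/hosts only when the name is missing. A hedged sketch with golang.org/x/crypto/ssh, where the key path, address, and VM name are placeholders:

	    package main

	    import (
	    	"fmt"
	    	"log"
	    	"os"

	    	"golang.org/x/crypto/ssh"
	    )

	    // runSSH runs one command in a fresh session and returns combined output.
	    func runSSH(client *ssh.Client, cmd string) (string, error) {
	    	sess, err := client.NewSession()
	    	if err != nil {
	    		return "", err
	    	}
	    	defer sess.Close()
	    	out, err := sess.CombinedOutput(cmd)
	    	return string(out), err
	    }

	    func main() {
	    	key, err := os.ReadFile(`C:\vms\demo-m02\id_rsa`) // placeholder key path
	    	if err != nil {
	    		log.Fatal(err)
	    	}
	    	signer, err := ssh.ParsePrivateKey(key)
	    	if err != nil {
	    		log.Fatal(err)
	    	}
	    	client, err := ssh.Dial("tcp", "172.20.111.83:22", &ssh.ClientConfig{
	    		User:            "docker",
	    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
	    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM only
	    	})
	    	if err != nil {
	    		log.Fatal(err)
	    	}
	    	defer client.Close()

	    	name := "demo-m02"
	    	for _, cmd := range []string{
	    		fmt.Sprintf("sudo hostname %s && echo '%s' | sudo tee /etc/hostname", name, name),
	    		fmt.Sprintf("grep -q '%s' /etc/hosts || echo '127.0.1.1 %s' | sudo tee -a /etc/hosts", name, name),
	    	} {
	    		if out, err := runSSH(client, cmd); err != nil {
	    			log.Fatalf("%s: %v (%s)", cmd, err, out)
	    		}
	    	}
	    }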
	I1014 07:23:40.801974   13076 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I1014 07:23:40.801974   13076 buildroot.go:174] setting up certificates
	I1014 07:23:40.801974   13076 provision.go:84] configureAuth start
	I1014 07:23:40.801974   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:42.869101   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:42.869101   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:42.869101   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:45.382211   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:23:45.382688   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:45.382688   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:47.501182   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:47.501778   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:47.501832   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:50.008838   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:23:50.008838   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:50.009193   13076 provision.go:143] copyHostCerts
	I1014 07:23:50.009346   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I1014 07:23:50.009346   13076 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I1014 07:23:50.009346   13076 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I1014 07:23:50.010015   13076 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1014 07:23:50.011395   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I1014 07:23:50.011967   13076 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I1014 07:23:50.011967   13076 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I1014 07:23:50.011967   13076 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1014 07:23:50.013212   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I1014 07:23:50.014109   13076 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I1014 07:23:50.014109   13076 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I1014 07:23:50.014507   13076 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I1014 07:23:50.015649   13076 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-132600-m02 san=[127.0.0.1 172.20.111.83 ha-132600-m02 localhost minikube]
	I1014 07:23:50.130152   13076 provision.go:177] copyRemoteCerts
	I1014 07:23:50.140319   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 07:23:50.140319   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:52.242533   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:52.242533   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:52.243067   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:54.757999   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:23:54.758132   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:54.758132   13076 sshutil.go:53] new ssh client: &{IP:172.20.111.83 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\id_rsa Username:docker}
	I1014 07:23:54.867658   13076 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7273333s)
	I1014 07:23:54.867658   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1014 07:23:54.867658   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1014 07:23:54.915126   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1014 07:23:54.915808   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I1014 07:23:54.962775   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1014 07:23:54.963449   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1014 07:23:55.012733   13076 provision.go:87] duration metric: took 14.2107408s to configureAuth
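	configureAuth took 14.2s because each certificate push costs another state/IP poll plus an SSH round trip: the CA and client certs are refreshed on the host, a server cert is minted with the VM's IP and hostname in its SANs, and ca.pem/server.pem/server-key.pem land in /etc/docker for dockerd's --tlsverify flags. The scp step can be approximated over an existing session by streaming through sudo tee; a sketch reusing the connected *ssh.Client from the snippet above (also needs the bytes import):

	    // pushFile streams bytes to a root-owned remote path via "sudo tee",
	    // a stand-in for the ssh_runner scp calls in the log.
	    func pushFile(client *ssh.Client, data []byte, remotePath string) error {
	    	sess, err := client.NewSession()
	    	if err != nil {
	    		return err
	    	}
	    	defer sess.Close()
	    	sess.Stdin = bytes.NewReader(data)
	    	return sess.Run(fmt.Sprintf("sudo tee %s >/dev/null", remotePath))
	    }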
	I1014 07:23:55.012733   13076 buildroot.go:189] setting minikube options for container-runtime
	I1014 07:23:55.013734   13076 config.go:182] Loaded profile config "ha-132600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:23:55.013734   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:57.174688   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:57.174688   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:57.175736   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:59.668037   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:23:59.668597   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:59.675074   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:23:59.675608   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.111.83 22 <nil> <nil>}
	I1014 07:23:59.675873   13076 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1014 07:23:59.826626   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1014 07:23:59.826626   13076 buildroot.go:70] root file system type: tmpfs
	I1014 07:23:59.826926   13076 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1014 07:23:59.827038   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:01.963890   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:01.964568   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:01.964568   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:04.515125   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:04.515943   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:04.521824   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:24:04.522248   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.111.83 22 <nil> <nil>}
	I1014 07:24:04.522248   13076 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.20.108.120"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1014 07:24:04.691427   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.20.108.120
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1014 07:24:04.692090   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:06.819823   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:06.820339   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:06.820339   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:09.321630   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:09.321630   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:09.326748   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:24:09.327748   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.111.83 22 <nil> <nil>}
	I1014 07:24:09.327808   13076 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1014 07:24:11.561493   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1014 07:24:11.561493   13076 machine.go:96] duration metric: took 44.993809s to provisionDockerMachine
	I1014 07:24:11.561493   13076 client.go:171] duration metric: took 1m54.3893961s to LocalClient.Create
	I1014 07:24:11.561493   13076 start.go:167] duration metric: took 1m54.3893961s to libmachine.API.Create "ha-132600"
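	The unit install above is deliberately idempotent: the rendered unit is written to docker.service.new, and only if diff reports a difference is it moved into place and the daemon reloaded, enabled, and restarted. Here diff cannot even stat the old file, so the move always happens and systemd creates the multi-user.target symlink. The guard, restructured as one reusable command string for the runSSH helper above:

	    // updateDockerUnit applies the diff-or-replace guard shown in the log.
	    const updateDockerUnit = `sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || ` +
	    	`{ sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; ` +
	    	`sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }`

	    // usage: out, err := runSSH(client, updateDockerUnit)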
	I1014 07:24:11.561493   13076 start.go:293] postStartSetup for "ha-132600-m02" (driver="hyperv")
	I1014 07:24:11.561493   13076 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 07:24:11.572490   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 07:24:11.572490   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:13.652544   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:13.653401   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:13.653599   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:16.188638   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:16.188638   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:16.189304   13076 sshutil.go:53] new ssh client: &{IP:172.20.111.83 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\id_rsa Username:docker}
	I1014 07:24:16.298812   13076 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7262867s)
	I1014 07:24:16.310231   13076 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 07:24:16.316927   13076 info.go:137] Remote host: Buildroot 2023.02.9
	I1014 07:24:16.316927   13076 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I1014 07:24:16.317465   13076 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I1014 07:24:16.318548   13076 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem -> 9362.pem in /etc/ssl/certs
	I1014 07:24:16.318548   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem -> /etc/ssl/certs/9362.pem
	I1014 07:24:16.332285   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 07:24:16.351469   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem --> /etc/ssl/certs/9362.pem (1708 bytes)
	I1014 07:24:16.401820   13076 start.go:296] duration metric: took 4.8403207s for postStartSetup
	I1014 07:24:16.404096   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:18.500218   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:18.501234   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:18.501234   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:20.974982   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:20.975125   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:20.975125   13076 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\config.json ...
	I1014 07:24:20.978124   13076 start.go:128] duration metric: took 2m3.8086365s to createHost
	I1014 07:24:20.978209   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:23.112937   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:23.113001   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:23.113001   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:25.636675   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:25.637206   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:25.641953   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:24:25.642670   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.111.83 22 <nil> <nil>}
	I1014 07:24:25.642670   13076 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1014 07:24:25.768945   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728915865.766702127
	
	I1014 07:24:25.769174   13076 fix.go:216] guest clock: 1728915865.766702127
	I1014 07:24:25.769174   13076 fix.go:229] Guest: 2024-10-14 07:24:25.766702127 -0700 PDT Remote: 2024-10-14 07:24:20.978124 -0700 PDT m=+320.730336401 (delta=4.788578127s)
	I1014 07:24:25.769174   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:27.845838   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:27.845838   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:27.845838   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:30.364175   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:30.364263   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:30.369382   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:24:30.369967   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.111.83 22 <nil> <nil>}
	I1014 07:24:30.370049   13076 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1728915865
	I1014 07:24:30.508807   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Oct 14 14:24:25 UTC 2024
	
	I1014 07:24:30.508807   13076 fix.go:236] clock set: Mon Oct 14 14:24:25 UTC 2024
	 (err=<nil>)
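	fix.go then reconciles the clocks: it reads the guest's date +%s.%N, compares it against the host's wall clock (a 4.79s delta here, mostly the time spent on the intervening PowerShell round trips), and writes an epoch second back with sudo date -s @<seconds>. A loose sketch of that flow, reusing runSSH and assuming the fmt, math, strconv, strings, and time imports; the 2s threshold is arbitrary and this is not minikube's exact fix.go logic:

	    // syncGuestClock reads the guest clock and resets it when the drift
	    // from the host exceeds a threshold.
	    func syncGuestClock(client *ssh.Client) error {
	    	out, err := runSSH(client, "date +%s.%N")
	    	if err != nil {
	    		return err
	    	}
	    	guest, err := strconv.ParseFloat(strings.TrimSpace(out), 64)
	    	if err != nil {
	    		return err
	    	}
	    	host := float64(time.Now().UnixNano()) / 1e9
	    	if math.Abs(host-guest) > 2 {
	    		_, err = runSSH(client, fmt.Sprintf("sudo date -s @%d", int64(host)))
	    	}
	    	return err
	    }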
	I1014 07:24:30.508807   13076 start.go:83] releasing machines lock for "ha-132600-m02", held for 2m13.3394475s
	I1014 07:24:30.508807   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:32.570516   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:32.570516   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:32.570516   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:35.081338   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:35.081338   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:35.084330   13076 out.go:177] * Found network options:
	I1014 07:24:35.087197   13076 out.go:177]   - NO_PROXY=172.20.108.120
	W1014 07:24:35.089531   13076 proxy.go:119] fail to check proxy env: Error ip not in block
	I1014 07:24:35.091879   13076 out.go:177]   - NO_PROXY=172.20.108.120
	W1014 07:24:35.094409   13076 proxy.go:119] fail to check proxy env: Error ip not in block
	W1014 07:24:35.095622   13076 proxy.go:119] fail to check proxy env: Error ip not in block
	I1014 07:24:35.098960   13076 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1014 07:24:35.099124   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:35.109488   13076 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1014 07:24:35.109488   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:37.293745   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:37.293818   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:37.293870   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:37.344951   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:37.344951   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:37.344951   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:39.933767   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:39.934139   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:39.934445   13076 sshutil.go:53] new ssh client: &{IP:172.20.111.83 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\id_rsa Username:docker}
	I1014 07:24:39.999458   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:39.999458   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:39.999458   13076 sshutil.go:53] new ssh client: &{IP:172.20.111.83 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\id_rsa Username:docker}
	I1014 07:24:40.039802   13076 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.930308s)
	W1014 07:24:40.039802   13076 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 07:24:40.051889   13076 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 07:24:40.057601   13076 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.9585844s)
	W1014 07:24:40.057601   13076 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1014 07:24:40.090774   13076 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1014 07:24:40.090774   13076 start.go:495] detecting cgroup driver to use...
	I1014 07:24:40.091315   13076 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 07:24:40.139757   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1014 07:24:40.172033   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	W1014 07:24:40.178804   13076 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W1014 07:24:40.178804   13076 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1014 07:24:40.194040   13076 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1014 07:24:40.209169   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1014 07:24:40.241042   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1014 07:24:40.273436   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1014 07:24:40.304842   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1014 07:24:40.336705   13076 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 07:24:40.371635   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1014 07:24:40.404378   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1014 07:24:40.435805   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1014 07:24:40.485319   13076 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 07:24:40.505473   13076 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1014 07:24:40.516975   13076 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1014 07:24:40.550905   13076 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 07:24:40.581632   13076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:24:40.785773   13076 ssh_runner.go:195] Run: sudo systemctl restart containerd
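	Runtime selection is a series of remote edits: crictl is pointed at containerd's socket, then sed rewrites /etc/containerd/config.toml in place (SystemdCgroup off in favor of cgroupfs, the runc.v2 shim substituted for both v1 runtime names, conf_dir set to /etc/cni/net.d) before daemon-reload and a containerd restart. The core edits, collected verbatim from the log as a list the runSSH helper could replay:

	    // containerdEdits are the in-place config.toml rewrites from the log,
	    // in the order they were applied.
	    var containerdEdits = []string{
	    	`sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml`,
	    	`sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml`,
	    	`sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml`,
	    	`sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml`,
	    	`sudo systemctl daemon-reload && sudo systemctl restart containerd`,
	    }

	    // usage: for _, e := range containerdEdits { runSSH(client, e) }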
	I1014 07:24:40.819978   13076 start.go:495] detecting cgroup driver to use...
	I1014 07:24:40.830992   13076 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1014 07:24:40.876801   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 07:24:40.913930   13076 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 07:24:40.966444   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 07:24:41.006432   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1014 07:24:41.044531   13076 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1014 07:24:41.114617   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1014 07:24:41.139995   13076 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 07:24:41.187779   13076 ssh_runner.go:195] Run: which cri-dockerd
	I1014 07:24:41.207367   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1014 07:24:41.226988   13076 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1014 07:24:41.272924   13076 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1014 07:24:41.470204   13076 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1014 07:24:41.650925   13076 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1014 07:24:41.650925   13076 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1014 07:24:41.694417   13076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:24:41.893295   13076 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1014 07:25:43.008408   13076 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.115034s)
	I1014 07:25:43.020423   13076 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I1014 07:25:43.059257   13076 out.go:201] 
	W1014 07:25:43.062124   13076 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Oct 14 14:24:09 ha-132600-m02 systemd[1]: Starting Docker Application Container Engine...
	Oct 14 14:24:09 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:09.938489944Z" level=info msg="Starting up"
	Oct 14 14:24:09 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:09.941531731Z" level=info msg="containerd not running, starting managed containerd"
	Oct 14 14:24:09 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:09.942638427Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=671
	Oct 14 14:24:09 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:09.976195986Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.003978070Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004049170Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004119469Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004233069Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004318268Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004457268Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004664767Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004761167Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004782067Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004795467Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004887666Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.005331964Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.008453752Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.008545052Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.008673551Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.008759151Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.008859951Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.008988550Z" level=info msg="metadata content store policy set" policy=shared
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.034095951Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.034335550Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.034407650Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.034432250Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.034449450Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.034603849Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.034945948Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035129847Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035263947Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035294447Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035312846Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035328546Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035343646Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035419846Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035443346Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035495946Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035526946Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035542746Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035565345Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035583545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035598745Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035630745Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035648245Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035663945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035689645Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035709545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035731545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035750545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035764245Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035777845Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035792145Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035809345Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035833944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035849444Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035864444Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036025444Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036057644Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036074443Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036087843Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036099043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036115343Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036131843Z" level=info msg="NRI interface is disabled by configuration."
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036482842Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036737141Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036799541Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036822041Z" level=info msg="containerd successfully booted in 0.062283s"
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.009541514Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.046546477Z" level=info msg="Loading containers: start."
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.205429391Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.426461627Z" level=info msg="Loading containers: done."
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.448023414Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.448125314Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.448200613Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.448511212Z" level=info msg="Daemon has completed initialization"
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.556153647Z" level=info msg="API listen on /var/run/docker.sock"
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.556338446Z" level=info msg="API listen on [::]:2376"
	Oct 14 14:24:11 ha-132600-m02 systemd[1]: Started Docker Application Container Engine.
	Oct 14 14:24:41 ha-132600-m02 systemd[1]: Stopping Docker Application Container Engine...
	Oct 14 14:24:41 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:41.916201367Z" level=info msg="Processing signal 'terminated'"
	Oct 14 14:24:41 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:41.918697964Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Oct 14 14:24:41 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:41.919058063Z" level=info msg="Daemon shutdown complete"
	Oct 14 14:24:41 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:41.919215663Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Oct 14 14:24:41 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:41.919252063Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Oct 14 14:24:42 ha-132600-m02 systemd[1]: docker.service: Deactivated successfully.
	Oct 14 14:24:42 ha-132600-m02 systemd[1]: Stopped Docker Application Container Engine.
	Oct 14 14:24:42 ha-132600-m02 systemd[1]: Starting Docker Application Container Engine...
	Oct 14 14:24:42 ha-132600-m02 dockerd[1075]: time="2024-10-14T14:24:42.978494717Z" level=info msg="Starting up"
	Oct 14 14:25:43 ha-132600-m02 dockerd[1075]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Oct 14 14:25:43 ha-132600-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Oct 14 14:25:43 ha-132600-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Oct 14 14:25:43 ha-132600-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W1014 07:25:43.062124   13076 out.go:270] * 
	W1014 07:25:43.062827   13076 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 07:25:43.067027   13076 out.go:201] 
	
	
	==> Docker <==
	Oct 14 14:22:32 ha-132600 cri-dockerd[1324]: time="2024-10-14T14:22:32Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d7137fb8ec569089599a03e11d18eda509d210d29fa54b5e6a0a8d7dd7a54e7f/resolv.conf as [nameserver 172.20.96.1]"
	Oct 14 14:22:32 ha-132600 cri-dockerd[1324]: time="2024-10-14T14:22:32Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/90ffeb0ce1aafcb639832f7144bd2dbb0b5c9deb53555b49e386b3d3e96d8bc3/resolv.conf as [nameserver 172.20.96.1]"
	Oct 14 14:22:32 ha-132600 cri-dockerd[1324]: time="2024-10-14T14:22:32Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7278ecf7cc9466e1fee2170d699055f6498d2ece8da31c4b3696ed85b3cd38cf/resolv.conf as [nameserver 172.20.96.1]"
	Oct 14 14:22:32 ha-132600 dockerd[1431]: time="2024-10-14T14:22:32.922383735Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 14 14:22:32 ha-132600 dockerd[1431]: time="2024-10-14T14:22:32.922463635Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 14 14:22:32 ha-132600 dockerd[1431]: time="2024-10-14T14:22:32.922481235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:22:32 ha-132600 dockerd[1431]: time="2024-10-14T14:22:32.922580735Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:22:32 ha-132600 dockerd[1431]: time="2024-10-14T14:22:32.996609949Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 14 14:22:32 ha-132600 dockerd[1431]: time="2024-10-14T14:22:32.996807948Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 14 14:22:32 ha-132600 dockerd[1431]: time="2024-10-14T14:22:32.996907948Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:22:32 ha-132600 dockerd[1431]: time="2024-10-14T14:22:32.998275745Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:22:33 ha-132600 dockerd[1431]: time="2024-10-14T14:22:33.069443031Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 14 14:22:33 ha-132600 dockerd[1431]: time="2024-10-14T14:22:33.070022429Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 14 14:22:33 ha-132600 dockerd[1431]: time="2024-10-14T14:22:33.070369527Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:22:33 ha-132600 dockerd[1431]: time="2024-10-14T14:22:33.076841098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:26:17 ha-132600 dockerd[1431]: time="2024-10-14T14:26:17.675114485Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 14 14:26:17 ha-132600 dockerd[1431]: time="2024-10-14T14:26:17.675249284Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 14 14:26:17 ha-132600 dockerd[1431]: time="2024-10-14T14:26:17.675264584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:26:17 ha-132600 dockerd[1431]: time="2024-10-14T14:26:17.675974482Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:26:17 ha-132600 cri-dockerd[1324]: time="2024-10-14T14:26:17Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/91751edf004eb5236aa2443a5cff30bdc82ba05d39a3583d9026a4f19faba52f/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Oct 14 14:26:19 ha-132600 cri-dockerd[1324]: time="2024-10-14T14:26:19Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Oct 14 14:26:19 ha-132600 dockerd[1431]: time="2024-10-14T14:26:19.456978196Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 14 14:26:19 ha-132600 dockerd[1431]: time="2024-10-14T14:26:19.457173796Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 14 14:26:19 ha-132600 dockerd[1431]: time="2024-10-14T14:26:19.457616296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:26:19 ha-132600 dockerd[1431]: time="2024-10-14T14:26:19.457980197Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2b8d44d586369       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   12 minutes ago      Running             busybox                   0                   91751edf004eb       busybox-7dff88458-kr92j
	5a6196684fc6e       c69fa2e9cbf5f                                                                                         16 minutes ago      Running             coredns                   0                   7278ecf7cc946       coredns-7c65d6cfc9-4qfrq
	81d6fdac8115f       6e38f40d628db                                                                                         16 minutes ago      Running             storage-provisioner       0                   90ffeb0ce1aaf       storage-provisioner
	52c3e5370a6c8       c69fa2e9cbf5f                                                                                         16 minutes ago      Running             coredns                   0                   d7137fb8ec569       coredns-7c65d6cfc9-zf6cd
	dae2c0aa67af3       kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387              16 minutes ago      Running             kindnet-cni               0                   277e64b59b8aa       kindnet-rkjqr
	4745a4b0dc379       60c005f310ff3                                                                                         17 minutes ago      Running             kube-proxy                0                   15f605d31df67       kube-proxy-zkbj8
	b08f7a3c2a5a6       ghcr.io/kube-vip/kube-vip@sha256:9e23baad11ae3e69d739430b9fdb60df22356b7da4b4f4e458fae0541619deb4     17 minutes ago      Running             kube-vip                  0                   46cfd7b278d40       kube-vip-ha-132600
	84fd76d493d80       2e96e5913fc06                                                                                         17 minutes ago      Running             etcd                      0                   d71566d00e2f9       etcd-ha-132600
	b661cb6713103       6bab7719df100                                                                                         17 minutes ago      Running             kube-apiserver            0                   cb18be6165830       kube-apiserver-ha-132600
	35c870864a800       9aa1fad941575                                                                                         17 minutes ago      Running             kube-scheduler            0                   36593363b631a       kube-scheduler-ha-132600
	4a8cce31aa3a2       175ffd71cce3d                                                                                         17 minutes ago      Running             kube-controller-manager   0                   f5fa69fe68b07       kube-controller-manager-ha-132600
	
	
	==> coredns [52c3e5370a6c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6020d6b23d8ab86e45d1d2aab12a43bd19ffd1800c3e6fd0a66779be525e59cd5618fbe141b55ee3a43e920652bc72e7efafa696e55e63b2f614e5fc80dabe10
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:41885 - 43638 "HINFO IN 5147527986633681541.7511835962423692488. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.048565283s
	[INFO] 10.244.0.4:49801 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0004036s
	[INFO] 10.244.0.4:57121 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.178873632s
	[INFO] 10.244.0.4:54985 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000301s
	[INFO] 10.244.0.4:47603 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000273799s
	[INFO] 10.244.0.4:50030 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000276399s
	[INFO] 10.244.0.4:38124 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000439399s
	[INFO] 10.244.0.4:43764 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000347699s
	[INFO] 10.244.0.4:53583 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0003446s
	[INFO] 10.244.0.4:50771 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0002502s
	[INFO] 10.244.0.4:32805 - 5 "PTR IN 1.96.20.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.0002399s
	
	
	==> coredns [5a6196684fc6] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6020d6b23d8ab86e45d1d2aab12a43bd19ffd1800c3e6fd0a66779be525e59cd5618fbe141b55ee3a43e920652bc72e7efafa696e55e63b2f614e5fc80dabe10
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:50464 - 22593 "HINFO IN 8090798371432773748.7872157757733337044. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.084180423s
	[INFO] 10.244.0.4:46373 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.004105792s
	[INFO] 10.244.0.4:41175 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.166814894s
	[INFO] 10.244.0.4:43040 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002867s
	[INFO] 10.244.0.4:33549 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.045685805s
	[INFO] 10.244.0.4:60793 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000884598s
	[INFO] 10.244.0.4:57558 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001339s
	[INFO] 10.244.0.4:33138 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.040147217s
	[INFO] 10.244.0.4:46303 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0002673s
	[INFO] 10.244.0.4:60761 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001771s
	[INFO] 10.244.0.4:55567 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000249599s
	
	
	==> describe nodes <==
	Name:               ha-132600
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-132600
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b
	                    minikube.k8s.io/name=ha-132600
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_14T07_22_04_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Oct 2024 14:22:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-132600
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Oct 2024 14:39:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 14 Oct 2024 14:36:53 +0000   Mon, 14 Oct 2024 14:22:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 14 Oct 2024 14:36:53 +0000   Mon, 14 Oct 2024 14:22:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 14 Oct 2024 14:36:53 +0000   Mon, 14 Oct 2024 14:22:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 14 Oct 2024 14:36:53 +0000   Mon, 14 Oct 2024 14:22:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.20.108.120
	  Hostname:    ha-132600
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 07bcc58a0c7e4f0eaae96253dcf0a4ae
	  System UUID:                e06d00fc-5ebc-1a49-bf31-4c6ce500fe9c
	  Boot ID:                    215d765d-07e3-4bb7-ba3c-58ab142d5afa
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-kr92j              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-7c65d6cfc9-4qfrq             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     17m
	  kube-system                 coredns-7c65d6cfc9-zf6cd             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     17m
	  kube-system                 etcd-ha-132600                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         17m
	  kube-system                 kindnet-rkjqr                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      17m
	  kube-system                 kube-apiserver-ha-132600             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-ha-132600    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-zkbj8                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-ha-132600             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-vip-ha-132600                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 17m   kube-proxy       
	  Normal  Starting                 17m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  17m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  17m   kubelet          Node ha-132600 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17m   kubelet          Node ha-132600 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17m   kubelet          Node ha-132600 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           17m   node-controller  Node ha-132600 event: Registered Node ha-132600 in Controller
	  Normal  NodeReady                16m   kubelet          Node ha-132600 status is now: NodeReady
	
	
	==> dmesg <==
	[  +1.245700] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +7.050916] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +45.956887] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.179434] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[Oct14 14:21] systemd-fstab-generator[997]: Ignoring "noauto" option for root device
	[  +0.095957] kauditd_printk_skb: 65 callbacks suppressed
	[  +0.527367] systemd-fstab-generator[1037]: Ignoring "noauto" option for root device
	[  +0.185829] systemd-fstab-generator[1049]: Ignoring "noauto" option for root device
	[  +0.220100] systemd-fstab-generator[1063]: Ignoring "noauto" option for root device
	[  +2.802946] systemd-fstab-generator[1277]: Ignoring "noauto" option for root device
	[  +0.209648] systemd-fstab-generator[1289]: Ignoring "noauto" option for root device
	[  +0.207153] systemd-fstab-generator[1301]: Ignoring "noauto" option for root device
	[  +0.288223] systemd-fstab-generator[1316]: Ignoring "noauto" option for root device
	[ +11.411364] systemd-fstab-generator[1416]: Ignoring "noauto" option for root device
	[  +0.107418] kauditd_printk_skb: 202 callbacks suppressed
	[  +3.752049] systemd-fstab-generator[1674]: Ignoring "noauto" option for root device
	[  +6.264384] systemd-fstab-generator[1823]: Ignoring "noauto" option for root device
	[  +0.102334] kauditd_printk_skb: 70 callbacks suppressed
	[  +5.637673] kauditd_printk_skb: 67 callbacks suppressed
	[Oct14 14:22] systemd-fstab-generator[2317]: Ignoring "noauto" option for root device
	[  +5.137329] kauditd_printk_skb: 17 callbacks suppressed
	[  +8.104433] kauditd_printk_skb: 29 callbacks suppressed
	[Oct14 14:26] kauditd_printk_skb: 28 callbacks suppressed
	
	
	==> etcd [84fd76d493d8] <==
	{"level":"info","ts":"2024-10-14T14:21:55.755114Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-14T14:21:55.755455Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"1c0f11edc616a87e","local-member-attributes":"{Name:ha-132600 ClientURLs:[https://172.20.108.120:2379]}","request-path":"/0/members/1c0f11edc616a87e/attributes","cluster-id":"60f23df2979e3d4a","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-14T14:21:55.755734Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-14T14:21:55.762346Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-14T14:21:55.764750Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-14T14:21:55.773938Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-14T14:21:55.771033Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-14T14:21:55.781509Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-14T14:21:55.790642Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.20.108.120:2379"}
	{"level":"info","ts":"2024-10-14T14:21:55.783509Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-14T14:22:07.879957Z","caller":"traceutil/trace.go:171","msg":"trace[597013235] transaction","detail":"{read_only:false; response_revision:340; number_of_response:1; }","duration":"128.695256ms","start":"2024-10-14T14:22:07.751244Z","end":"2024-10-14T14:22:07.879940Z","steps":["trace[597013235] 'process raft request'  (duration: 56.790725ms)","trace[597013235] 'compare'  (duration: 71.246831ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-14T14:22:07.880058Z","caller":"traceutil/trace.go:171","msg":"trace[2064377417] transaction","detail":"{read_only:false; response_revision:343; number_of_response:1; }","duration":"126.232155ms","start":"2024-10-14T14:22:07.753813Z","end":"2024-10-14T14:22:07.880045Z","steps":["trace[2064377417] 'process raft request'  (duration: 125.814455ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-14T14:22:07.880130Z","caller":"traceutil/trace.go:171","msg":"trace[152598700] transaction","detail":"{read_only:false; response_revision:341; number_of_response:1; }","duration":"128.322756ms","start":"2024-10-14T14:22:07.751800Z","end":"2024-10-14T14:22:07.880123Z","steps":["trace[152598700] 'process raft request'  (duration: 127.754056ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-14T14:22:07.880149Z","caller":"traceutil/trace.go:171","msg":"trace[1241907766] transaction","detail":"{read_only:false; response_revision:342; number_of_response:1; }","duration":"128.197556ms","start":"2024-10-14T14:22:07.751946Z","end":"2024-10-14T14:22:07.880144Z","steps":["trace[1241907766] 'process raft request'  (duration: 127.649056ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-14T14:22:13.374435Z","caller":"traceutil/trace.go:171","msg":"trace[979670185] transaction","detail":"{read_only:false; response_revision:369; number_of_response:1; }","duration":"443.350864ms","start":"2024-10-14T14:22:12.931064Z","end":"2024-10-14T14:22:13.374415Z","steps":["trace[979670185] 'process raft request'  (duration: 443.018064ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-14T14:22:13.376146Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-14T14:22:12.931047Z","time spent":"444.536163ms","remote":"127.0.0.1:55470","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5580,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/etcd-ha-132600\" mod_revision:367 > success:<request_put:<key:\"/registry/pods/kube-system/etcd-ha-132600\" value_size:5531 >> failure:<request_range:<key:\"/registry/pods/kube-system/etcd-ha-132600\" > >"}
	{"level":"info","ts":"2024-10-14T14:22:28.783322Z","caller":"traceutil/trace.go:171","msg":"trace[1840750524] transaction","detail":"{read_only:false; response_revision:403; number_of_response:1; }","duration":"157.257125ms","start":"2024-10-14T14:22:28.626047Z","end":"2024-10-14T14:22:28.783304Z","steps":["trace[1840750524] 'process raft request'  (duration: 157.088326ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-14T14:22:29.878049Z","caller":"traceutil/trace.go:171","msg":"trace[630262405] transaction","detail":"{read_only:false; response_revision:404; number_of_response:1; }","duration":"108.586835ms","start":"2024-10-14T14:22:29.769442Z","end":"2024-10-14T14:22:29.878029Z","steps":["trace[630262405] 'process raft request'  (duration: 107.997536ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-14T14:23:24.077614Z","caller":"traceutil/trace.go:171","msg":"trace[844030949] transaction","detail":"{read_only:false; response_revision:541; number_of_response:1; }","duration":"108.091259ms","start":"2024-10-14T14:23:23.969504Z","end":"2024-10-14T14:23:24.077595Z","steps":["trace[844030949] 'process raft request'  (duration: 107.87726ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-14T14:31:56.536139Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":948}
	{"level":"info","ts":"2024-10-14T14:31:56.621225Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":948,"took":"84.804509ms","hash":4240470983,"current-db-size-bytes":2416640,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2416640,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2024-10-14T14:31:56.621500Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4240470983,"revision":948,"compact-revision":-1}
	{"level":"info","ts":"2024-10-14T14:36:56.554994Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1485}
	{"level":"info","ts":"2024-10-14T14:36:56.566306Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1485,"took":"10.58958ms","hash":2881326002,"current-db-size-bytes":2416640,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":1830912,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2024-10-14T14:36:56.566433Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2881326002,"revision":1485,"compact-revision":948}
	
	
	==> kernel <==
	 14:39:10 up 19 min,  0 users,  load average: 0.45, 0.36, 0.31
	Linux ha-132600 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [dae2c0aa67af] <==
	I1014 14:37:07.573144       1 main.go:300] handling current node
	I1014 14:37:17.564003       1 main.go:296] Handling node with IPs: map[172.20.108.120:{}]
	I1014 14:37:17.564067       1 main.go:300] handling current node
	I1014 14:37:27.563269       1 main.go:296] Handling node with IPs: map[172.20.108.120:{}]
	I1014 14:37:27.563423       1 main.go:300] handling current node
	I1014 14:37:37.564967       1 main.go:296] Handling node with IPs: map[172.20.108.120:{}]
	I1014 14:37:37.565195       1 main.go:300] handling current node
	I1014 14:37:47.567116       1 main.go:296] Handling node with IPs: map[172.20.108.120:{}]
	I1014 14:37:47.567209       1 main.go:300] handling current node
	I1014 14:37:57.565444       1 main.go:296] Handling node with IPs: map[172.20.108.120:{}]
	I1014 14:37:57.565503       1 main.go:300] handling current node
	I1014 14:38:07.568761       1 main.go:296] Handling node with IPs: map[172.20.108.120:{}]
	I1014 14:38:07.568963       1 main.go:300] handling current node
	I1014 14:38:17.563315       1 main.go:296] Handling node with IPs: map[172.20.108.120:{}]
	I1014 14:38:17.563401       1 main.go:300] handling current node
	I1014 14:38:27.568742       1 main.go:296] Handling node with IPs: map[172.20.108.120:{}]
	I1014 14:38:27.568934       1 main.go:300] handling current node
	I1014 14:38:37.568691       1 main.go:296] Handling node with IPs: map[172.20.108.120:{}]
	I1014 14:38:37.569346       1 main.go:300] handling current node
	I1014 14:38:47.570273       1 main.go:296] Handling node with IPs: map[172.20.108.120:{}]
	I1014 14:38:47.570385       1 main.go:300] handling current node
	I1014 14:38:57.563343       1 main.go:296] Handling node with IPs: map[172.20.108.120:{}]
	I1014 14:38:57.563396       1 main.go:300] handling current node
	I1014 14:39:07.569997       1 main.go:296] Handling node with IPs: map[172.20.108.120:{}]
	I1014 14:39:07.570146       1 main.go:300] handling current node
	
	
	==> kube-apiserver [b661cb671310] <==
	I1014 14:21:58.925388       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1014 14:21:58.925426       1 policy_source.go:224] refreshing policies
	I1014 14:21:58.928296       1 controller.go:615] quota admission added evaluator for: namespaces
	E1014 14:21:58.950810       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1014 14:21:59.174958       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1014 14:21:59.819127       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1014 14:21:59.829797       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1014 14:21:59.829835       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1014 14:22:00.942976       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1014 14:22:01.015766       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1014 14:22:01.150423       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1014 14:22:01.164901       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.20.108.120]
	I1014 14:22:01.165818       1 controller.go:615] quota admission added evaluator for: endpoints
	I1014 14:22:01.175247       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1014 14:22:01.835583       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1014 14:22:03.554010       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1014 14:22:03.589407       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1014 14:22:03.617368       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1014 14:22:07.346752       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1014 14:22:07.516653       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E1014 14:38:03.250866       1 conn.go:339] Error on socket receive: read tcp 172.20.111.254:8443->172.20.96.1:57382: use of closed network connection
	E1014 14:38:04.469699       1 conn.go:339] Error on socket receive: read tcp 172.20.111.254:8443->172.20.96.1:57390: use of closed network connection
	E1014 14:38:05.596658       1 conn.go:339] Error on socket receive: read tcp 172.20.111.254:8443->172.20.96.1:57398: use of closed network connection
	E1014 14:38:40.228042       1 conn.go:339] Error on socket receive: read tcp 172.20.111.254:8443->172.20.96.1:57419: use of closed network connection
	E1014 14:38:50.673483       1 conn.go:339] Error on socket receive: read tcp 172.20.111.254:8443->172.20.96.1:57421: use of closed network connection
	
	
	==> kube-controller-manager [4a8cce31aa3a] <==
	I1014 14:22:07.963572       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="72.664531ms"
	I1014 14:22:07.964409       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="234.3µs"
	I1014 14:22:31.675332       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600"
	I1014 14:22:31.702335       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600"
	I1014 14:22:31.722232       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="290.099µs"
	I1014 14:22:31.739820       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="1.478096ms"
	I1014 14:22:31.765093       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="242.4µs"
	I1014 14:22:31.815158       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="67µs"
	I1014 14:22:31.836955       1 node_lifecycle_controller.go:1055] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I1014 14:22:33.199416       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="65.9µs"
	I1014 14:22:34.284134       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="108.4µs"
	I1014 14:22:34.352001       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="27.147579ms"
	I1014 14:22:34.400543       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="48.068986ms"
	I1014 14:22:34.400805       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="105.799µs"
	I1014 14:22:34.401110       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="152.899µs"
	I1014 14:22:34.558208       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600"
	I1014 14:26:17.151439       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="87.464595ms"
	I1014 14:26:17.170321       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="18.788035ms"
	I1014 14:26:17.170417       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="37.299µs"
	I1014 14:26:17.175584       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="43.499µs"
	I1014 14:26:20.622046       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="12.44731ms"
	I1014 14:26:20.622317       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="202µs"
	I1014 14:26:40.517161       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600"
	I1014 14:31:47.122690       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600"
	I1014 14:36:53.521795       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600"
	
	
	==> kube-proxy [4745a4b0dc37] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1014 14:22:09.024513       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1014 14:22:09.047174       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["172.20.108.120"]
	E1014 14:22:09.047308       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1014 14:22:09.124346       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1014 14:22:09.124494       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1014 14:22:09.124529       1 server_linux.go:169] "Using iptables Proxier"
	I1014 14:22:09.128809       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1014 14:22:09.129721       1 server.go:483] "Version info" version="v1.31.1"
	I1014 14:22:09.129805       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 14:22:09.131652       1 config.go:199] "Starting service config controller"
	I1014 14:22:09.131988       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1014 14:22:09.132269       1 config.go:105] "Starting endpoint slice config controller"
	I1014 14:22:09.132619       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1014 14:22:09.133592       1 config.go:328] "Starting node config controller"
	I1014 14:22:09.135184       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1014 14:22:09.232621       1 shared_informer.go:320] Caches are synced for service config
	I1014 14:22:09.233933       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1014 14:22:09.236054       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [35c870864a80] <==
	W1014 14:21:59.944424       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1014 14:21:59.944452       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 14:21:59.979078       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1014 14:21:59.979365       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 14:22:00.111250       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1014 14:22:00.111664       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 14:22:00.192015       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1014 14:22:00.192346       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1014 14:22:00.196984       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1014 14:22:00.197112       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 14:22:00.197208       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1014 14:22:00.197241       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1014 14:22:00.212172       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1014 14:22:00.212214       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1014 14:22:00.260144       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1014 14:22:00.261002       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1014 14:22:00.276235       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1014 14:22:00.276419       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1014 14:22:00.295031       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1014 14:22:00.295892       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 14:22:00.311664       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1014 14:22:00.313212       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1014 14:22:00.354351       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1014 14:22:00.355204       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 14:22:02.117335       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 14 14:35:03 ha-132600 kubelet[2324]: E1014 14:35:03.675096    2324 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 14 14:35:03 ha-132600 kubelet[2324]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 14 14:35:03 ha-132600 kubelet[2324]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 14 14:35:03 ha-132600 kubelet[2324]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 14 14:35:03 ha-132600 kubelet[2324]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 14 14:36:03 ha-132600 kubelet[2324]: E1014 14:36:03.678469    2324 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 14 14:36:03 ha-132600 kubelet[2324]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 14 14:36:03 ha-132600 kubelet[2324]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 14 14:36:03 ha-132600 kubelet[2324]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 14 14:36:03 ha-132600 kubelet[2324]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 14 14:37:03 ha-132600 kubelet[2324]: E1014 14:37:03.677822    2324 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 14 14:37:03 ha-132600 kubelet[2324]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 14 14:37:03 ha-132600 kubelet[2324]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 14 14:37:03 ha-132600 kubelet[2324]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 14 14:37:03 ha-132600 kubelet[2324]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 14 14:38:03 ha-132600 kubelet[2324]: E1014 14:38:03.684487    2324 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 14 14:38:03 ha-132600 kubelet[2324]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 14 14:38:03 ha-132600 kubelet[2324]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 14 14:38:03 ha-132600 kubelet[2324]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 14 14:38:03 ha-132600 kubelet[2324]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 14 14:39:03 ha-132600 kubelet[2324]: E1014 14:39:03.675462    2324 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 14 14:39:03 ha-132600 kubelet[2324]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 14 14:39:03 ha-132600 kubelet[2324]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 14 14:39:03 ha-132600 kubelet[2324]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 14 14:39:03 ha-132600 kubelet[2324]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [81d6fdac8115] <==
	I1014 14:22:33.432711       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1014 14:22:33.504323       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1014 14:22:33.505471       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1014 14:22:33.522254       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1014 14:22:33.522619       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ha-132600_6ba40601-1895-44e0-aab6-02c0536a0b9c!
	I1014 14:22:33.527769       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5c7ed5ed-4913-4d2f-8634-767d8aa0727d", APIVersion:"v1", ResourceVersion:"436", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ha-132600_6ba40601-1895-44e0-aab6-02c0536a0b9c became leader
	I1014 14:22:33.636551       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ha-132600_6ba40601-1895-44e0-aab6-02c0536a0b9c!
	

                                                
                                                
-- /stdout --
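The journal excerpt above captures the root failure on ha-132600-m02: after the 14:24:41 restart of docker.service, dockerd could not dial /run/containerd/containerd.sock within its 60-second deadline, so systemd marked the unit failed and the node's runtime never came back. A plausible next diagnostic step, assuming shell access to the m02 VM is still possible, is to check whether containerd itself restarted cleanly (the node flag and command below are a hypothetical follow-up, not part of the test run):

	# target the m02 machine in the ha-132600 profile; check both units
	minikube ssh -p ha-132600 -n m02 -- "sudo systemctl status containerd docker --no-pager"
	# if containerd is dead, its last journal lines usually show why the socket never appeared
	minikube ssh -p ha-132600 -n m02 -- "sudo journalctl -u containerd --no-pager | tail -n 50"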
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-132600 -n ha-132600
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-132600 -n ha-132600: (11.6857187s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-132600 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-7dff88458-8thz6 busybox-7dff88458-rng7p
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-132600 describe pod busybox-7dff88458-8thz6 busybox-7dff88458-rng7p
helpers_test.go:282: (dbg) kubectl --context ha-132600 describe pod busybox-7dff88458-8thz6 busybox-7dff88458-rng7p:

                                                
                                                
-- stdout --
	Name:             busybox-7dff88458-8thz6
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7dff88458
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7dff88458
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7r884 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-7r884:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                  From               Message
	  ----     ------            ----                 ----               -------
	  Warning  FailedScheduling  2m50s (x3 over 13m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	
	
	Name:             busybox-7dff88458-rng7p
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7dff88458
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7dff88458
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9hpj2 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-9hpj2:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                  From               Message
	  ----     ------            ----                 ----               -------
	  Warning  FailedScheduling  2m50s (x3 over 13m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.

-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (44.53s)
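
The describe output above shows why this test never gets past scheduling: both remaining busybox replicas are Pending with FailedScheduling because the Deployment's pod anti-affinity cannot be satisfied while the cluster still has a single schedulable node. A quick way to confirm that from the same kubectl context (hypothetical diagnostic commands, not part of the recorded run; the Deployment name busybox is inferred from the ReplicaSet busybox-7dff88458 above):

	# Dump the anti-affinity stanza the scheduler is enforcing
	kubectl --context ha-132600 get deployment busybox -o jsonpath='{.spec.template.spec.affinity.podAntiAffinity}'
	# Count schedulable nodes (PowerShell); the Pending replicas can only land once more nodes join
	(kubectl --context ha-132600 get nodes --no-headers | Measure-Object -Line).Lines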

TestMultiControlPlane/serial/AddWorkerNode (272.14s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe node add -p ha-132600 -v=7 --alsologtostderr
E1014 07:40:37.375938     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\functional-572000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe node add -p ha-132600 -v=7 --alsologtostderr: (3m24.7960724s)
ha_test.go:234: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-132600 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-132600 status -v=7 --alsologtostderr: exit status 2 (34.6765243s)

-- stdout --
	ha-132600
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-132600-m02
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-132600-m03
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1014 07:42:48.620323   10872 out.go:345] Setting OutFile to fd 1384 ...
	I1014 07:42:48.622329   10872 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:42:48.622329   10872 out.go:358] Setting ErrFile to fd 1172...
	I1014 07:42:48.622329   10872 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:42:48.642386   10872 out.go:352] Setting JSON to false
	I1014 07:42:48.642920   10872 mustload.go:65] Loading cluster: ha-132600
	I1014 07:42:48.642920   10872 notify.go:220] Checking for updates...
	I1014 07:42:48.644218   10872 config.go:182] Loaded profile config "ha-132600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:42:48.644218   10872 status.go:174] checking status of ha-132600 ...
	I1014 07:42:48.645155   10872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:42:50.729486   10872 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:42:50.729548   10872 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:42:50.729548   10872 status.go:371] ha-132600 host status = "Running" (err=<nil>)
	I1014 07:42:50.729548   10872 host.go:66] Checking if "ha-132600" exists ...
	I1014 07:42:50.730259   10872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:42:52.832253   10872 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:42:52.832364   10872 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:42:52.832364   10872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:42:55.335688   10872 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:42:55.335688   10872 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:42:55.336700   10872 host.go:66] Checking if "ha-132600" exists ...
	I1014 07:42:55.347796   10872 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 07:42:55.347796   10872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:42:57.397658   10872 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:42:57.398057   10872 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:42:57.398057   10872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:42:59.902561   10872 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:42:59.902561   10872 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:42:59.902875   10872 sshutil.go:53] new ssh client: &{IP:172.20.108.120 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\id_rsa Username:docker}
	I1014 07:43:00.007896   10872 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.660033s)
	I1014 07:43:00.020579   10872 ssh_runner.go:195] Run: systemctl --version
	I1014 07:43:00.041124   10872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 07:43:00.067756   10872 kubeconfig.go:125] found "ha-132600" server: "https://172.20.111.254:8443"
	I1014 07:43:00.067801   10872 api_server.go:166] Checking apiserver status ...
	I1014 07:43:00.078689   10872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 07:43:00.118958   10872 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2197/cgroup
	W1014 07:43:00.139842   10872 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2197/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1014 07:43:00.150847   10872 ssh_runner.go:195] Run: ls
	I1014 07:43:00.158133   10872 api_server.go:253] Checking apiserver healthz at https://172.20.111.254:8443/healthz ...
	I1014 07:43:00.165689   10872 api_server.go:279] https://172.20.111.254:8443/healthz returned 200:
	ok
	I1014 07:43:00.165689   10872 status.go:463] ha-132600 apiserver status = Running (err=<nil>)
	I1014 07:43:00.165740   10872 status.go:176] ha-132600 status: &{Name:ha-132600 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1014 07:43:00.165740   10872 status.go:174] checking status of ha-132600-m02 ...
	I1014 07:43:00.165979   10872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:43:02.231723   10872 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:43:02.231891   10872 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:43:02.231891   10872 status.go:371] ha-132600-m02 host status = "Running" (err=<nil>)
	I1014 07:43:02.231891   10872 host.go:66] Checking if "ha-132600-m02" exists ...
	I1014 07:43:02.232672   10872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:43:04.335807   10872 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:43:04.336316   10872 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:43:04.336316   10872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:43:06.847935   10872 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:43:06.848794   10872 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:43:06.848794   10872 host.go:66] Checking if "ha-132600-m02" exists ...
	I1014 07:43:06.861765   10872 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 07:43:06.861765   10872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:43:08.938080   10872 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:43:08.938150   10872 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:43:08.938150   10872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:43:11.443513   10872 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:43:11.443737   10872 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:43:11.444101   10872 sshutil.go:53] new ssh client: &{IP:172.20.111.83 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\id_rsa Username:docker}
	I1014 07:43:11.555247   10872 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.693475s)
	I1014 07:43:11.567481   10872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 07:43:11.592973   10872 kubeconfig.go:125] found "ha-132600" server: "https://172.20.111.254:8443"
	I1014 07:43:11.592973   10872 api_server.go:166] Checking apiserver status ...
	I1014 07:43:11.604146   10872 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1014 07:43:11.628454   10872 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1014 07:43:11.628454   10872 status.go:463] ha-132600-m02 apiserver status = Stopped (err=<nil>)
	I1014 07:43:11.628454   10872 status.go:176] ha-132600-m02 status: &{Name:ha-132600-m02 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1014 07:43:11.628454   10872 status.go:174] checking status of ha-132600-m03 ...
	I1014 07:43:11.629300   10872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m03 ).state
	I1014 07:43:13.716202   10872 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:43:13.716202   10872 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:43:13.716202   10872 status.go:371] ha-132600-m03 host status = "Running" (err=<nil>)
	I1014 07:43:13.716202   10872 host.go:66] Checking if "ha-132600-m03" exists ...
	I1014 07:43:13.717160   10872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m03 ).state
	I1014 07:43:15.847772   10872 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:43:15.848817   10872 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:43:15.848866   10872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m03 ).networkadapters[0]).ipaddresses[0]
	I1014 07:43:18.361230   10872 main.go:141] libmachine: [stdout =====>] : 172.20.111.174
	
	I1014 07:43:18.361230   10872 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:43:18.361230   10872 host.go:66] Checking if "ha-132600-m03" exists ...
	I1014 07:43:18.375472   10872 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 07:43:18.375472   10872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m03 ).state
	I1014 07:43:20.487881   10872 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:43:20.488429   10872 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:43:20.488506   10872 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m03 ).networkadapters[0]).ipaddresses[0]
	I1014 07:43:22.989033   10872 main.go:141] libmachine: [stdout =====>] : 172.20.111.174
	
	I1014 07:43:22.989217   10872 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:43:22.989426   10872 sshutil.go:53] new ssh client: &{IP:172.20.111.174 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m03\id_rsa Username:docker}
	I1014 07:43:23.098330   10872 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.722852s)
	I1014 07:43:23.109481   10872 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 07:43:23.136900   10872 status.go:176] ha-132600-m03 status: &{Name:ha-132600-m03 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:236: failed to run minikube status. args "out/minikube-windows-amd64.exe -p ha-132600 status -v=7 --alsologtostderr" : exit status 2
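
The exit status 2 matches the payload above: ha-132600-m02 reports host Running but kubelet and apiserver Stopped, so the aggregate status check fails even though the new worker ha-132600-m03 joined. A way to narrow this down interactively (hypothetical follow-up commands, assuming the profile is still up; -n/--node is minikube's documented node selector):

	# Re-check only the unhealthy control-plane node
	out/minikube-windows-amd64.exe -p ha-132600 status -n ha-132600-m02 -v=7 --alsologtostderr
	# Inspect the stopped kubelet on that node over SSH
	out/minikube-windows-amd64.exe -p ha-132600 ssh -n ha-132600-m02 -- sudo systemctl status kubelet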
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-132600 -n ha-132600
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-132600 -n ha-132600: (11.7162696s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/AddWorkerNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/AddWorkerNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-132600 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-132600 logs -n 25: (8.155724s)
helpers_test.go:252: TestMultiControlPlane/serial/AddWorkerNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| kubectl | -p ha-132600 -- get pods -o          | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:36 PDT | 14 Oct 24 07:36 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- get pods -o          | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:36 PDT | 14 Oct 24 07:36 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- get pods -o          | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:36 PDT | 14 Oct 24 07:36 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- get pods -o          | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:36 PDT | 14 Oct 24 07:36 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- get pods -o          | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:36 PDT | 14 Oct 24 07:36 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- get pods -o          | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:36 PDT | 14 Oct 24 07:36 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- get pods -o          | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:37 PDT | 14 Oct 24 07:37 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- get pods -o          | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:37 PDT | 14 Oct 24 07:37 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- get pods -o          | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT | 14 Oct 24 07:38 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- get pods -o          | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT | 14 Oct 24 07:38 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT |                     |
	|         | busybox-7dff88458-8thz6 --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT | 14 Oct 24 07:38 PDT |
	|         | busybox-7dff88458-kr92j --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT |                     |
	|         | busybox-7dff88458-rng7p --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT |                     |
	|         | busybox-7dff88458-8thz6 --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT | 14 Oct 24 07:38 PDT |
	|         | busybox-7dff88458-kr92j --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT |                     |
	|         | busybox-7dff88458-rng7p --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT |                     |
	|         | busybox-7dff88458-8thz6 -- nslookup  |           |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT | 14 Oct 24 07:38 PDT |
	|         | busybox-7dff88458-kr92j -- nslookup  |           |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT |                     |
	|         | busybox-7dff88458-rng7p -- nslookup  |           |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- get pods -o          | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT | 14 Oct 24 07:38 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT |                     |
	|         | busybox-7dff88458-8thz6              |           |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT | 14 Oct 24 07:38 PDT |
	|         | busybox-7dff88458-kr92j              |           |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT |                     |
	|         | busybox-7dff88458-kr92j -- sh        |           |                   |         |                     |                     |
	|         | -c ping -c 1 172.20.96.1             |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT |                     |
	|         | busybox-7dff88458-rng7p              |           |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |         |                     |                     |
	| node    | add -p ha-132600 -v=7                | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:39 PDT | 14 Oct 24 07:42 PDT |
	|         | --alsologtostderr                    |           |                   |         |                     |                     |
	|---------|--------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
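	
	The audit trail above repeats one jsonpath query from 07:36 to 07:38: the test is polling until every busybox pod reports a podIP. A minimal PowerShell equivalent of that poll (hypothetical, for illustration only; three replicas matches the pods seen in this run):
	
	    do {
	        Start-Sleep -Seconds 10
	        $ips = kubectl --context ha-132600 get pods -l app=busybox -o jsonpath='{.items[*].status.podIP}'
	    } while (($ips -split '\s+' | Where-Object { $_ }).Count -lt 3)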
	
	
	==> Last Start <==
	Log file created at: 2024/10/14 07:19:00
	Running on machine: minikube1
	Binary: Built with gc go1.23.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 07:19:00.342865   13076 out.go:345] Setting OutFile to fd 1228 ...
	I1014 07:19:00.344546   13076 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:19:00.344546   13076 out.go:358] Setting ErrFile to fd 824...
	I1014 07:19:00.344546   13076 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:19:00.368247   13076 out.go:352] Setting JSON to false
	I1014 07:19:00.372277   13076 start.go:129] hostinfo: {"hostname":"minikube1","uptime":101054,"bootTime":1728814485,"procs":202,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5011 Build 19045.5011","kernelVersion":"10.0.19045.5011 Build 19045.5011","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W1014 07:19:00.372803   13076 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1014 07:19:00.379728   13076 out.go:177] * [ha-132600] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5011 Build 19045.5011
	I1014 07:19:00.386820   13076 notify.go:220] Checking for updates...
	I1014 07:19:00.389571   13076 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1014 07:19:00.392679   13076 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 07:19:00.395167   13076 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I1014 07:19:00.398285   13076 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 07:19:00.400520   13076 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 07:19:00.404118   13076 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 07:19:05.654471   13076 out.go:177] * Using the hyperv driver based on user configuration
	I1014 07:19:05.658285   13076 start.go:297] selected driver: hyperv
	I1014 07:19:05.658285   13076 start.go:901] validating driver "hyperv" against <nil>
	I1014 07:19:05.658285   13076 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 07:19:05.705211   13076 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1014 07:19:05.706541   13076 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 07:19:05.706541   13076 cni.go:84] Creating CNI manager for ""
	I1014 07:19:05.706541   13076 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1014 07:19:05.706541   13076 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1014 07:19:05.707066   13076 start.go:340] cluster config:
	{Name:ha-132600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-132600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 07:19:05.707207   13076 iso.go:125] acquiring lock: {Name:mkb71635bcbccba29dce9048ad4cc71430a7e577 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 07:19:05.711264   13076 out.go:177] * Starting "ha-132600" primary control-plane node in "ha-132600" cluster
	I1014 07:19:05.715097   13076 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1014 07:19:05.715262   13076 preload.go:146] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I1014 07:19:05.715344   13076 cache.go:56] Caching tarball of preloaded images
	I1014 07:19:05.715452   13076 preload.go:172] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1014 07:19:05.715452   13076 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1014 07:19:05.716553   13076 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\config.json ...
	I1014 07:19:05.716898   13076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\config.json: {Name:mkbd11bf3f90adebf1f4630d2e8deaac328748d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:19:05.717886   13076 start.go:360] acquireMachinesLock for ha-132600: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 07:19:05.717886   13076 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-132600"
	I1014 07:19:05.718562   13076 start.go:93] Provisioning new machine with config: &{Name:ha-132600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-132600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1014 07:19:05.718562   13076 start.go:125] createHost starting for "" (driver="hyperv")
	I1014 07:19:05.721891   13076 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1014 07:19:05.722555   13076 start.go:159] libmachine.API.Create for "ha-132600" (driver="hyperv")
	I1014 07:19:05.722555   13076 client.go:168] LocalClient.Create starting
	I1014 07:19:05.722555   13076 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I1014 07:19:05.723316   13076 main.go:141] libmachine: Decoding PEM data...
	I1014 07:19:05.723316   13076 main.go:141] libmachine: Parsing certificate...
	I1014 07:19:05.723316   13076 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I1014 07:19:05.723877   13076 main.go:141] libmachine: Decoding PEM data...
	I1014 07:19:05.723955   13076 main.go:141] libmachine: Parsing certificate...
	I1014 07:19:05.723955   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I1014 07:19:07.752927   13076 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I1014 07:19:07.753576   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:07.753793   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I1014 07:19:09.445838   13076 main.go:141] libmachine: [stdout =====>] : False
	
	I1014 07:19:09.445838   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:09.446315   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1014 07:19:10.901034   13076 main.go:141] libmachine: [stdout =====>] : True
	
	I1014 07:19:10.901034   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:10.901034   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1014 07:19:14.320078   13076 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1014 07:19:14.320309   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:14.323253   13076 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1014 07:19:14.829443   13076 main.go:141] libmachine: Creating SSH key...
	I1014 07:19:14.988345   13076 main.go:141] libmachine: Creating VM...
	I1014 07:19:14.988345   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1014 07:19:17.790932   13076 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1014 07:19:17.791005   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:17.791145   13076 main.go:141] libmachine: Using switch "Default Switch"
	I1014 07:19:17.791145   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1014 07:19:19.529861   13076 main.go:141] libmachine: [stdout =====>] : True
	
	I1014 07:19:19.529861   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:19.529861   13076 main.go:141] libmachine: Creating VHD
	I1014 07:19:19.530436   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\fixed.vhd' -SizeBytes 10MB -Fixed
	I1014 07:19:23.105904   13076 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 267BC11D-A195-4636-8AE9-EF1D7966C95C
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I1014 07:19:23.105987   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:23.106037   13076 main.go:141] libmachine: Writing magic tar header
	I1014 07:19:23.106037   13076 main.go:141] libmachine: Writing SSH key tar header
	I1014 07:19:23.118040   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\disk.vhd' -VHDType Dynamic -DeleteSource
	I1014 07:19:26.201609   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:26.201950   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:26.202023   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\disk.vhd' -SizeBytes 20000MB
	I1014 07:19:28.768368   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:28.768789   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:28.768969   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-132600 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I1014 07:19:32.226690   13076 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-132600 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I1014 07:19:32.226915   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:32.226915   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-132600 -DynamicMemoryEnabled $false
	I1014 07:19:34.390731   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:34.390837   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:34.391049   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-132600 -Count 2
	I1014 07:19:36.449382   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:36.449628   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:36.449628   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-132600 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\boot2docker.iso'
	I1014 07:19:38.932909   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:38.932909   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:38.932909   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-132600 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\disk.vhd'
	I1014 07:19:41.509229   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:41.509229   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:41.509229   13076 main.go:141] libmachine: Starting VM...
	I1014 07:19:41.509229   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-132600
	I1014 07:19:44.718680   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:44.718680   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:44.719475   13076 main.go:141] libmachine: Waiting for host to start...
	I1014 07:19:44.719770   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:19:46.954823   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:19:46.954823   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:46.955829   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:19:49.408227   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:49.408227   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:50.409369   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:19:52.561607   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:19:52.561912   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:52.561912   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:19:55.081136   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:55.081187   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:56.082401   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:19:58.271497   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:19:58.271497   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:58.272384   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:00.740863   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:20:00.741070   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:01.741536   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:03.902654   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:03.902721   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:03.902721   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:06.378064   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:20:06.378064   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:07.379104   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:09.537329   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:09.537329   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:09.537541   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:12.074320   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:12.074320   13076 main.go:141] libmachine: [stderr =====>] : 
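	
	The loop above is minikube's libmachine shelling out to powershell.exe roughly once a second until the VM reports an IPv4 address. A condensed PowerShell sketch of the same wait (hypothetical; it only mirrors the polling observable in the log, minikube drives this internally):
	
	    do {
	        Start-Sleep -Seconds 1
	        $ip = ( Hyper-V\Get-VM ha-132600 ).NetworkAdapters[0].IPAddresses[0]
	    } while (-not $ip)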
	I1014 07:20:12.074834   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:14.171613   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:14.171613   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:14.171613   13076 machine.go:93] provisionDockerMachine start ...
	I1014 07:20:14.172377   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:16.225265   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:16.225265   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:16.225634   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:18.692061   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:18.692061   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:18.698261   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:20:18.713495   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.108.120 22 <nil> <nil>}
	I1014 07:20:18.713568   13076 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 07:20:18.839311   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1014 07:20:18.839517   13076 buildroot.go:166] provisioning hostname "ha-132600"
	I1014 07:20:18.839517   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:20.911254   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:20.911424   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:20.911601   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:23.370138   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:23.370756   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:23.377008   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:20:23.377773   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.108.120 22 <nil> <nil>}
	I1014 07:20:23.377773   13076 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-132600 && echo "ha-132600" | sudo tee /etc/hostname
	I1014 07:20:23.537548   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-132600
	
	I1014 07:20:23.537548   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:25.571429   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:25.571429   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:25.572326   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:28.068677   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:28.068677   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:28.077026   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:20:28.078615   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.108.120 22 <nil> <nil>}
	I1014 07:20:28.078615   13076 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-132600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-132600/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-132600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 07:20:28.216809   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 07:20:28.216874   13076 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I1014 07:20:28.216990   13076 buildroot.go:174] setting up certificates
	I1014 07:20:28.217016   13076 provision.go:84] configureAuth start
	I1014 07:20:28.217047   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:30.246758   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:30.247093   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:30.247368   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:32.688428   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:32.688428   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:32.688428   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:34.748908   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:34.748908   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:34.749349   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:37.221628   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:37.221798   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:37.221798   13076 provision.go:143] copyHostCerts
	I1014 07:20:37.221998   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I1014 07:20:37.222444   13076 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I1014 07:20:37.222558   13076 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I1014 07:20:37.223110   13076 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1014 07:20:37.224109   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I1014 07:20:37.224599   13076 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I1014 07:20:37.224599   13076 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I1014 07:20:37.224959   13076 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I1014 07:20:37.225332   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I1014 07:20:37.225332   13076 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I1014 07:20:37.225332   13076 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I1014 07:20:37.226765   13076 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1014 07:20:37.227733   13076 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-132600 san=[127.0.0.1 172.20.108.120 ha-132600 localhost minikube]
	I1014 07:20:37.731674   13076 provision.go:177] copyRemoteCerts
	I1014 07:20:37.744706   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 07:20:37.744706   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:39.801309   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:39.801309   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:39.801309   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:42.273306   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:42.273367   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:42.273367   13076 sshutil.go:53] new ssh client: &{IP:172.20.108.120 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\id_rsa Username:docker}
	I1014 07:20:42.378794   13076 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.6340227s)
	I1014 07:20:42.378847   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1014 07:20:42.378847   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1200 bytes)
	I1014 07:20:42.424772   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1014 07:20:42.425289   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 07:20:42.471397   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1014 07:20:42.471964   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1014 07:20:42.527104   13076 provision.go:87] duration metric: took 14.3100395s to configureAuth
	I1014 07:20:42.527104   13076 buildroot.go:189] setting minikube options for container-runtime
	I1014 07:20:42.527712   13076 config.go:182] Loaded profile config "ha-132600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:20:42.528252   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:44.598308   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:44.598308   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:44.599305   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:47.040424   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:47.040637   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:47.045726   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:20:47.046398   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.108.120 22 <nil> <nil>}
	I1014 07:20:47.046398   13076 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1014 07:20:47.167710   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1014 07:20:47.167801   13076 buildroot.go:70] root file system type: tmpfs
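
Aside: the root filesystem check above is a single shell pipeline; since the ISO's root is tmpfs, nothing written there persists across boots, which is presumably why the docker unit is rewritten on every provision. A trivial local sketch of the same detection (the log runs it over SSH):

// Sketch: detect the root filesystem type via df, as in the SSH command above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("sh", "-c",
		`df --output=fstype / | tail -n 1`).Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("root file system type:", strings.TrimSpace(string(out)))
}
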
	I1014 07:20:47.168208   13076 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1014 07:20:47.168315   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:49.202111   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:49.202331   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:49.202424   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:51.648278   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:51.648278   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:51.654248   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:20:51.655052   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.108.120 22 <nil> <nil>}
	I1014 07:20:51.655052   13076 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1014 07:20:51.814048   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1014 07:20:51.814229   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:53.871643   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:53.871643   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:53.872030   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:56.286054   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:56.286054   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:56.292357   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:20:56.293125   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.108.120 22 <nil> <nil>}
	I1014 07:20:56.293125   13076 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1014 07:20:58.458028   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1014 07:20:58.458028   13076 machine.go:96] duration metric: took 44.286361s to provisionDockerMachine
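
Aside: the unit install above uses a write-to-.new / diff / swap idiom: the new docker.service is tee'd to a .new path, and only if it differs from the installed copy (or, as here, diff fails because no copy exists yet) is it moved into place followed by daemon-reload, enable, and restart. A local sketch of that idempotent update, with a hypothetical installIfChanged helper standing in for the SSH one-liner:

// Sketch: install a systemd unit only when its content changed.
// Illustrative paths; the real commands run as root inside the VM.
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func installIfChanged(path string, desired []byte) error {
	current, err := os.ReadFile(path)
	if err == nil && bytes.Equal(current, desired) {
		return nil // unchanged: skip daemon-reload/restart entirely
	}
	// First boot hits the "can't stat" branch in the log: path doesn't exist yet.
	if err := os.WriteFile(path+".new", desired, 0o644); err != nil {
		return err
	}
	if err := os.Rename(path+".new", path); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"daemon-reload"}, {"enable", "docker"}, {"restart", "docker"},
	} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n") // truncated example
	if err := installIfChanged("/lib/systemd/system/docker.service", unit); err != nil {
		fmt.Println(err)
	}
}
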
	I1014 07:20:58.458028   13076 client.go:171] duration metric: took 1m52.7353371s to LocalClient.Create
	I1014 07:20:58.458028   13076 start.go:167] duration metric: took 1m52.7353371s to libmachine.API.Create "ha-132600"
	I1014 07:20:58.458028   13076 start.go:293] postStartSetup for "ha-132600" (driver="hyperv")
	I1014 07:20:58.458028   13076 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 07:20:58.471262   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 07:20:58.471262   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:21:00.512590   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:21:00.513162   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:00.513321   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:21:02.924795   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:21:02.924929   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:02.925163   13076 sshutil.go:53] new ssh client: &{IP:172.20.108.120 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\id_rsa Username:docker}
	I1014 07:21:03.032582   13076 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.5613141s)
	I1014 07:21:03.044373   13076 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 07:21:03.050831   13076 info.go:137] Remote host: Buildroot 2023.02.9
	I1014 07:21:03.050831   13076 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I1014 07:21:03.051271   13076 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I1014 07:21:03.052198   13076 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem -> 9362.pem in /etc/ssl/certs
	I1014 07:21:03.052295   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem -> /etc/ssl/certs/9362.pem
	I1014 07:21:03.063873   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 07:21:03.081968   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem --> /etc/ssl/certs/9362.pem (1708 bytes)
	I1014 07:21:03.128637   13076 start.go:296] duration metric: took 4.6706031s for postStartSetup
	I1014 07:21:03.132031   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:21:05.196102   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:21:05.196622   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:05.196696   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:21:07.624940   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:21:07.625781   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:07.626103   13076 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\config.json ...
	I1014 07:21:07.629046   13076 start.go:128] duration metric: took 2m1.9103356s to createHost
	I1014 07:21:07.629046   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:21:09.687162   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:21:09.687420   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:09.687490   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:21:12.104556   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:21:12.104556   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:12.110165   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:21:12.110572   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.108.120 22 <nil> <nil>}
	I1014 07:21:12.110572   13076 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1014 07:21:12.241524   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728915672.238854784
	
	I1014 07:21:12.241655   13076 fix.go:216] guest clock: 1728915672.238854784
	I1014 07:21:12.241726   13076 fix.go:229] Guest: 2024-10-14 07:21:12.238854784 -0700 PDT Remote: 2024-10-14 07:21:07.6290467 -0700 PDT m=+127.381500801 (delta=4.609808084s)
	I1014 07:21:12.241841   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:21:14.299338   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:21:14.299599   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:14.299599   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:21:16.767477   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:21:16.767665   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:16.775565   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:21:16.775565   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.108.120 22 <nil> <nil>}
	I1014 07:21:16.775565   13076 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1728915672
	I1014 07:21:16.921038   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Oct 14 14:21:12 UTC 2024
	
	I1014 07:21:16.921096   13076 fix.go:236] clock set: Mon Oct 14 14:21:12 UTC 2024
	 (err=<nil>)
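
Aside: the fix.go lines above measure guest-clock drift by running date +%s.%N on the VM, comparing it to the controller's clock (delta=4.609808084s here), and pushing the host time with sudo date -s @<epoch>. A sketch of that check, with both sides run locally for brevity rather than over SSH:

// Sketch: measure clock drift and decide whether to reset the guest clock.
package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
	"time"
)

func main() {
	out, err := exec.Command("date", "+%s.%N").Output()
	if err != nil {
		panic(err)
	}
	guest, _ := strconv.ParseFloat(strings.TrimSpace(string(out)), 64)
	host := float64(time.Now().UnixNano()) / 1e9
	delta := time.Duration((guest - host) * float64(time.Second))
	fmt.Println("delta:", delta)
	if delta > time.Second || delta < -time.Second {
		// Equivalent of the log's `sudo date -s @1728915672` on the guest.
		fmt.Printf("would run: sudo date -s @%d\n", time.Now().Unix())
	}
}
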
	I1014 07:21:16.921169   13076 start.go:83] releasing machines lock for "ha-132600", held for 2m11.2030498s
	I1014 07:21:16.921319   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:21:18.988904   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:21:18.989898   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:18.989947   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:21:21.410167   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:21:21.410403   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:21.414482   13076 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1014 07:21:21.414683   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:21:21.423844   13076 ssh_runner.go:195] Run: cat /version.json
	I1014 07:21:21.423844   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:21:23.554001   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:21:23.554206   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:23.554362   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:21:23.565882   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:21:23.565882   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:23.565882   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:21:26.092249   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:21:26.092249   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:26.092379   13076 sshutil.go:53] new ssh client: &{IP:172.20.108.120 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\id_rsa Username:docker}
	I1014 07:21:26.119019   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:21:26.119086   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:26.119215   13076 sshutil.go:53] new ssh client: &{IP:172.20.108.120 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\id_rsa Username:docker}
	I1014 07:21:26.179573   13076 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.764971s)
	W1014 07:21:26.179670   13076 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
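
Aside: this exit-127 is the root cause of the registry warning emitted below (the W1014 "Failing to connect to https://registry.k8s.io/" lines). The probe did not fail for network reasons: the runner shelled the Windows binary name curl.exe into the Linux guest, where only curl exists. A hedged sketch of the apparent fix, choosing the binary name by where the command runs rather than by the caller's OS (curlBinary and targetOS are hypothetical names, not minikube's API):

// Sketch: pick the curl binary name by the target OS, not the host OS.
package main

import "fmt"

func curlBinary(targetOS string) string {
	if targetOS == "windows" {
		return "curl.exe"
	}
	return "curl" // the minikube guest is Linux, so this branch applies
}

func main() {
	// The guest-side connectivity probe should therefore be:
	fmt.Printf("%s -sS -m 2 https://registry.k8s.io/\n", curlBinary("linux"))
}
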
	I1014 07:21:26.212879   13076 ssh_runner.go:235] Completed: cat /version.json: (4.7890281s)
	I1014 07:21:26.223815   13076 ssh_runner.go:195] Run: systemctl --version
	I1014 07:21:26.243251   13076 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 07:21:26.251321   13076 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 07:21:26.262431   13076 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 07:21:26.290027   13076 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1014 07:21:26.290027   13076 start.go:495] detecting cgroup driver to use...
	I1014 07:21:26.290027   13076 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W1014 07:21:26.296743   13076 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W1014 07:21:26.296743   13076 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1014 07:21:26.340326   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1014 07:21:26.370362   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1014 07:21:26.391878   13076 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1014 07:21:26.402847   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1014 07:21:26.433241   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1014 07:21:26.462619   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1014 07:21:26.491735   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1014 07:21:26.520404   13076 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 07:21:26.550615   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1014 07:21:26.580278   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1014 07:21:26.608564   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1014 07:21:26.636849   13076 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 07:21:26.656448   13076 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1014 07:21:26.667504   13076 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1014 07:21:26.700184   13076 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
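
Aside: the three commands above form a standard netfilter bring-up: probe the bridge sysctl, load br_netfilter when the /proc entry is missing (the "cannot stat" status-255 case), then enable IPv4 forwarding. A local sketch of the same sequence (needs root; the log runs it over SSH):

// Sketch: ensure br_netfilter is loaded and IPv4 forwarding is on.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	key := "/proc/sys/net/bridge/bridge-nf-call-iptables"
	if _, err := os.Stat(key); err != nil {
		// Mirrors the "cannot stat" branch in the log: module not loaded yet.
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Printf("modprobe: %v: %s\n", err, out)
			return
		}
	}
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644); err != nil {
		fmt.Println("enable ip_forward:", err)
	}
}
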
	I1014 07:21:26.726834   13076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:21:26.916163   13076 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1014 07:21:26.953200   13076 start.go:495] detecting cgroup driver to use...
	I1014 07:21:26.967627   13076 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1014 07:21:27.005080   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 07:21:27.039193   13076 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 07:21:27.083701   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 07:21:27.118670   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1014 07:21:27.153508   13076 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1014 07:21:27.209689   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1014 07:21:27.230765   13076 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 07:21:27.273213   13076 ssh_runner.go:195] Run: which cri-dockerd
	I1014 07:21:27.291783   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1014 07:21:27.308581   13076 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1014 07:21:27.351709   13076 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1014 07:21:27.539571   13076 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1014 07:21:27.725567   13076 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1014 07:21:27.725567   13076 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1014 07:21:27.768723   13076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:21:27.951142   13076 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1014 07:21:30.486027   13076 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5347969s)
	I1014 07:21:30.497722   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1014 07:21:30.531680   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1014 07:21:30.563742   13076 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1014 07:21:30.767218   13076 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1014 07:21:30.967093   13076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:21:31.191192   13076 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1014 07:21:31.229757   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1014 07:21:31.267426   13076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:21:31.462408   13076 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1014 07:21:31.569509   13076 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1014 07:21:31.581151   13076 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1014 07:21:31.591322   13076 start.go:563] Will wait 60s for crictl version
	I1014 07:21:31.604845   13076 ssh_runner.go:195] Run: which crictl
	I1014 07:21:31.622874   13076 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1014 07:21:31.680621   13076 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I1014 07:21:31.689423   13076 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1014 07:21:31.732594   13076 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1014 07:21:31.766375   13076 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	I1014 07:21:31.766375   13076 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I1014 07:21:31.770835   13076 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I1014 07:21:31.770835   13076 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I1014 07:21:31.770835   13076 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I1014 07:21:31.770835   13076 ip.go:211] Found interface: {Index:13 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:08:65:4d Flags:up|broadcast|multicast|running}
	I1014 07:21:31.773955   13076 ip.go:214] interface addr: fe80::b548:ff79:d464:75b8/64
	I1014 07:21:31.773955   13076 ip.go:214] interface addr: 172.20.96.1/20
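
Aside: the ip.go lines above walk the host's network interfaces looking for the one whose name starts with "vEthernet (Default Switch)", then read its addresses to find the host-side gateway IP. The same walk in plain stdlib Go, as a sketch:

// Sketch: find the Hyper-V default-switch adapter and print its addresses.
package main

import (
	"fmt"
	"net"
	"strings"
)

func main() {
	const prefix = "vEthernet (Default Switch)"
	ifaces, err := net.Interfaces()
	if err != nil {
		panic(err)
	}
	for _, iface := range ifaces {
		if !strings.HasPrefix(iface.Name, prefix) {
			fmt.Printf("%q does not match prefix %q\n", iface.Name, prefix)
			continue
		}
		addrs, _ := iface.Addrs()
		for _, a := range addrs {
			fmt.Println("interface addr:", a) // e.g. 172.20.96.1/20 in the log
		}
		return
	}
}

The 172.20.96.1 address it finds is then written into the guest's /etc/hosts as host.minikube.internal, as the next two commands show.
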
	I1014 07:21:31.784171   13076 ssh_runner.go:195] Run: grep 172.20.96.1	host.minikube.internal$ /etc/hosts
	I1014 07:21:31.791192   13076 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.20.96.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 07:21:31.825649   13076 kubeadm.go:883] updating cluster {Name:ha-132600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-132600 Namespace:default APIServerHAVIP:172.20.111.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.108.120 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 07:21:31.825649   13076 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1014 07:21:31.834667   13076 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1014 07:21:31.867707   13076 docker.go:689] Got preloaded images: 
	I1014 07:21:31.867707   13076 docker.go:695] registry.k8s.io/kube-apiserver:v1.31.1 wasn't preloaded
	I1014 07:21:31.880238   13076 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1014 07:21:31.908893   13076 ssh_runner.go:195] Run: which lz4
	I1014 07:21:31.914374   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1014 07:21:31.924715   13076 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1014 07:21:31.930846   13076 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1014 07:21:31.931075   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (342028912 bytes)
	I1014 07:21:33.829320   13076 docker.go:653] duration metric: took 1.9148657s to copy over tarball
	I1014 07:21:33.840382   13076 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1014 07:21:42.539043   13076 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.6986506s)
	I1014 07:21:42.539174   13076 ssh_runner.go:146] rm: /preloaded.tar.lz4
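
Aside: the preload path above is check-then-ship: stat the tarball on the guest, scp the cached copy only if the stat fails, unpack it into /var with security xattrs preserved, and delete the tarball. A rough local stand-in for those steps (the copy replaces the scp, and the tarball name is taken from the log):

// Sketch: ship and unpack the image preload tarball if it isn't present.
package main

import (
	"fmt"
	"io"
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4"
	if _, err := os.Stat(tarball); err != nil {
		// Existence check failed, so ship the cached tarball (scp in the log).
		src, err := os.Open("preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4")
		if err != nil {
			panic(err)
		}
		defer src.Close()
		dst, err := os.Create(tarball)
		if err != nil {
			panic(err)
		}
		io.Copy(dst, src)
		dst.Close()
	}
	// Same flags as the log: keep security xattrs, decompress with lz4.
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include",
		"security.capability", "-I", "lz4", "-C", "/var", "-xf", tarball)
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("tar: %v: %s\n", err, out)
		return
	}
	os.Remove(tarball)
}
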
	I1014 07:21:42.611065   13076 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1014 07:21:42.636639   13076 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I1014 07:21:42.681041   13076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:21:42.879066   13076 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1014 07:21:46.161713   13076 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.2824935s)
	I1014 07:21:46.173131   13076 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1014 07:21:46.198645   13076 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1014 07:21:46.198645   13076 cache_images.go:84] Images are preloaded, skipping loading
	I1014 07:21:46.198645   13076 kubeadm.go:934] updating node { 172.20.108.120 8443 v1.31.1 docker true true} ...
	I1014 07:21:46.198645   13076 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-132600 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.20.108.120
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-132600 Namespace:default APIServerHAVIP:172.20.111.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 07:21:46.209892   13076 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1014 07:21:46.283733   13076 cni.go:84] Creating CNI manager for ""
	I1014 07:21:46.283861   13076 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1014 07:21:46.283932   13076 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1014 07:21:46.284000   13076 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.20.108.120 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-132600 NodeName:ha-132600 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.20.108.120"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.20.108.120 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 07:21:46.284290   13076 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.20.108.120
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-132600"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "172.20.108.120"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.20.108.120"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1014 07:21:46.284367   13076 kube-vip.go:115] generating kube-vip config ...
	I1014 07:21:46.295693   13076 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1014 07:21:46.324017   13076 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1014 07:21:46.324230   13076 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.20.111.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I1014 07:21:46.334184   13076 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1014 07:21:46.349321   13076 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 07:21:46.360771   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1014 07:21:46.379224   13076 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (310 bytes)
	I1014 07:21:46.412080   13076 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 07:21:46.441691   13076 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I1014 07:21:46.479025   13076 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1014 07:21:46.519461   13076 ssh_runner.go:195] Run: grep 172.20.111.254	control-plane.minikube.internal$ /etc/hosts
	I1014 07:21:46.525021   13076 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.20.111.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 07:21:46.554943   13076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:21:46.730466   13076 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 07:21:46.761153   13076 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600 for IP: 172.20.108.120
	I1014 07:21:46.761153   13076 certs.go:194] generating shared ca certs ...
	I1014 07:21:46.761153   13076 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:21:46.761825   13076 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I1014 07:21:46.762680   13076 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I1014 07:21:46.762846   13076 certs.go:256] generating profile certs ...
	I1014 07:21:46.763605   13076 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\client.key
	I1014 07:21:46.763605   13076 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\client.crt with IP's: []
	I1014 07:21:46.955865   13076 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\client.crt ...
	I1014 07:21:46.955865   13076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\client.crt: {Name:mk4a5e51a4b9d62279c7ef4ef47716bcdf88d704 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:21:46.957857   13076 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\client.key ...
	I1014 07:21:46.957857   13076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\client.key: {Name:mkc80e99cfe4d8d17198a87fb188cfa1a72d727d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:21:46.958881   13076 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.key.59094d61
	I1014 07:21:46.958881   13076 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.crt.59094d61 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.20.108.120 172.20.111.254]
	I1014 07:21:47.375660   13076 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.crt.59094d61 ...
	I1014 07:21:47.375660   13076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.crt.59094d61: {Name:mke858ab0ac103663123708f9814cfdffe03211e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:21:47.377216   13076 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.key.59094d61 ...
	I1014 07:21:47.377216   13076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.key.59094d61: {Name:mk972fca892a193d6b4e84d42e27ed86ce9f0457 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:21:47.377811   13076 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.crt.59094d61 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.crt
	I1014 07:21:47.391806   13076 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.key.59094d61 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.key
	I1014 07:21:47.392823   13076 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.key
	I1014 07:21:47.392823   13076 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.crt with IP's: []
	I1014 07:21:47.675751   13076 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.crt ...
	I1014 07:21:47.675751   13076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.crt: {Name:mk5a9f1239213dd0a0f36b4ee41d5ac56cbd79c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:21:47.677918   13076 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.key ...
	I1014 07:21:47.677918   13076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.key: {Name:mkb931094180fef1516b0d43acb6430ecbcbb3fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:21:47.678894   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I1014 07:21:47.678894   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I1014 07:21:47.678894   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1014 07:21:47.678894   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1014 07:21:47.679943   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1014 07:21:47.679943   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1014 07:21:47.679943   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1014 07:21:47.691415   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1014 07:21:47.693468   13076 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\936.pem (1338 bytes)
	W1014 07:21:47.693468   13076 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\936_empty.pem, impossibly tiny 0 bytes
	I1014 07:21:47.694004   13076 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1014 07:21:47.694400   13076 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I1014 07:21:47.694671   13076 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1014 07:21:47.695091   13076 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I1014 07:21:47.695960   13076 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem (1708 bytes)
	I1014 07:21:47.696145   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\936.pem -> /usr/share/ca-certificates/936.pem
	I1014 07:21:47.696401   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem -> /usr/share/ca-certificates/9362.pem
	I1014 07:21:47.696580   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1014 07:21:47.697580   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 07:21:47.742705   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1014 07:21:47.790988   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 07:21:47.837975   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 07:21:47.884872   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1014 07:21:47.936878   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1014 07:21:47.985330   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 07:21:48.029181   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 07:21:48.078824   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\936.pem --> /usr/share/ca-certificates/936.pem (1338 bytes)
	I1014 07:21:48.127907   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem --> /usr/share/ca-certificates/9362.pem (1708 bytes)
	I1014 07:21:48.174542   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 07:21:48.218635   13076 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 07:21:48.264009   13076 ssh_runner.go:195] Run: openssl version
	I1014 07:21:48.282939   13076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 07:21:48.310256   13076 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 07:21:48.318050   13076 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 13:44 /usr/share/ca-certificates/minikubeCA.pem
	I1014 07:21:48.328768   13076 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 07:21:48.348090   13076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 07:21:48.376484   13076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/936.pem && ln -fs /usr/share/ca-certificates/936.pem /etc/ssl/certs/936.pem"
	I1014 07:21:48.408494   13076 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/936.pem
	I1014 07:21:48.416513   13076 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 14:00 /usr/share/ca-certificates/936.pem
	I1014 07:21:48.427815   13076 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/936.pem
	I1014 07:21:48.447033   13076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/936.pem /etc/ssl/certs/51391683.0"
	I1014 07:21:48.474293   13076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9362.pem && ln -fs /usr/share/ca-certificates/9362.pem /etc/ssl/certs/9362.pem"
	I1014 07:21:48.502293   13076 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9362.pem
	I1014 07:21:48.509688   13076 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 14:00 /usr/share/ca-certificates/9362.pem
	I1014 07:21:48.519811   13076 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9362.pem
	I1014 07:21:48.540403   13076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9362.pem /etc/ssl/certs/3ec20f2e.0"
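
Aside: the openssl/ln pairs above implement the standard ca-certificates layout: compute each PEM's OpenSSL subject hash and symlink <hash>.0 in /etc/ssl/certs to it (b5213941.0 for minikubeCA.pem here), which is how TLS clients locate trust anchors. A sketch of one such pair, shelling out to openssl just as the log does:

// Sketch: create the /etc/ssl/certs/<subject-hash>.0 symlink for one CA.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		panic(err)
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	os.Remove(link) // emulate `ln -fs`: replace any stale link
	if err := os.Symlink(pem, link); err != nil {
		fmt.Println("symlink:", err)
	}
}
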
	I1014 07:21:48.569007   13076 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 07:21:48.575758   13076 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1014 07:21:48.576118   13076 kubeadm.go:392] StartCluster: {Name:ha-132600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-132600 Namespace:default APIServerHAVIP:172.20.111.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.108.120 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 07:21:48.584978   13076 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1014 07:21:48.619430   13076 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 07:21:48.647606   13076 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 07:21:48.676291   13076 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 07:21:48.693679   13076 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 07:21:48.693679   13076 kubeadm.go:157] found existing configuration files:
	
	I1014 07:21:48.703613   13076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 07:21:48.722067   13076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 07:21:48.732618   13076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 07:21:48.759836   13076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 07:21:48.777622   13076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 07:21:48.789748   13076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 07:21:48.818512   13076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 07:21:48.837396   13076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 07:21:48.848696   13076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 07:21:48.876926   13076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 07:21:48.893530   13076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 07:21:48.904372   13076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
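The grep/rm pairs above implement stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at the control-plane VIP, and is otherwise deleted so `kubeadm init` can regenerate it. A condensed Go sketch of the same loop, with the endpoint and file list taken from the logged commands:

```go
package main

import (
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Equivalent of `sudo rm -f`: a missing file is not an error.
			_ = os.Remove(f)
		}
	}
}
```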
	I1014 07:21:48.920210   13076 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1014 07:21:49.392185   13076 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 07:22:04.140256   13076 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1014 07:22:04.140412   13076 kubeadm.go:310] [preflight] Running pre-flight checks
	I1014 07:22:04.140501   13076 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 07:22:04.140848   13076 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 07:22:04.141206   13076 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 07:22:04.141311   13076 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 07:22:04.144180   13076 out.go:235]   - Generating certificates and keys ...
	I1014 07:22:04.144373   13076 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1014 07:22:04.144575   13076 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1014 07:22:04.144939   13076 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1014 07:22:04.145172   13076 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1014 07:22:04.145315   13076 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1014 07:22:04.145521   13076 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1014 07:22:04.145845   13076 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1014 07:22:04.146212   13076 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-132600 localhost] and IPs [172.20.108.120 127.0.0.1 ::1]
	I1014 07:22:04.146365   13076 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1014 07:22:04.146614   13076 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-132600 localhost] and IPs [172.20.108.120 127.0.0.1 ::1]
	I1014 07:22:04.147003   13076 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1014 07:22:04.147200   13076 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1014 07:22:04.147236   13076 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1014 07:22:04.147236   13076 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 07:22:04.147236   13076 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 07:22:04.147236   13076 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 07:22:04.147954   13076 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 07:22:04.147987   13076 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 07:22:04.147987   13076 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 07:22:04.147987   13076 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 07:22:04.148785   13076 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 07:22:04.151235   13076 out.go:235]   - Booting up control plane ...
	I1014 07:22:04.151235   13076 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 07:22:04.151819   13076 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 07:22:04.152046   13076 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 07:22:04.152333   13076 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 07:22:04.152333   13076 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 07:22:04.152333   13076 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1014 07:22:04.152333   13076 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 07:22:04.152333   13076 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 07:22:04.153426   13076 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 523.263843ms
	I1014 07:22:04.153534   13076 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1014 07:22:04.153534   13076 kubeadm.go:310] [api-check] The API server is healthy after 9.176461865s
	I1014 07:22:04.153534   13076 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1014 07:22:04.154243   13076 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1014 07:22:04.154243   13076 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1014 07:22:04.154779   13076 kubeadm.go:310] [mark-control-plane] Marking the node ha-132600 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1014 07:22:04.154779   13076 kubeadm.go:310] [bootstrap-token] Using token: rm3d8a.inqsemtt5b5l5x2e
	I1014 07:22:04.157926   13076 out.go:235]   - Configuring RBAC rules ...
	I1014 07:22:04.158760   13076 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1014 07:22:04.158928   13076 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1014 07:22:04.158928   13076 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1014 07:22:04.159656   13076 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1014 07:22:04.159851   13076 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1014 07:22:04.160089   13076 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1014 07:22:04.160270   13076 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1014 07:22:04.160529   13076 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1014 07:22:04.160529   13076 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1014 07:22:04.160529   13076 kubeadm.go:310] 
	I1014 07:22:04.160529   13076 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1014 07:22:04.160529   13076 kubeadm.go:310] 
	I1014 07:22:04.160529   13076 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1014 07:22:04.160529   13076 kubeadm.go:310] 
	I1014 07:22:04.161100   13076 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1014 07:22:04.161186   13076 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1014 07:22:04.161352   13076 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1014 07:22:04.161352   13076 kubeadm.go:310] 
	I1014 07:22:04.161534   13076 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1014 07:22:04.161534   13076 kubeadm.go:310] 
	I1014 07:22:04.161706   13076 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1014 07:22:04.161706   13076 kubeadm.go:310] 
	I1014 07:22:04.161950   13076 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1014 07:22:04.161950   13076 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1014 07:22:04.161950   13076 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1014 07:22:04.161950   13076 kubeadm.go:310] 
	I1014 07:22:04.162515   13076 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1014 07:22:04.162607   13076 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1014 07:22:04.162607   13076 kubeadm.go:310] 
	I1014 07:22:04.162607   13076 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token rm3d8a.inqsemtt5b5l5x2e \
	I1014 07:22:04.162607   13076 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f876f951efbe7f1ed47ca578ee0d33e6ce8fe69c1a008a8154ee00f56ac84e46 \
	I1014 07:22:04.163162   13076 kubeadm.go:310] 	--control-plane 
	I1014 07:22:04.163261   13076 kubeadm.go:310] 
	I1014 07:22:04.163340   13076 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1014 07:22:04.163340   13076 kubeadm.go:310] 
	I1014 07:22:04.163570   13076 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token rm3d8a.inqsemtt5b5l5x2e \
	I1014 07:22:04.163570   13076 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f876f951efbe7f1ed47ca578ee0d33e6ce8fe69c1a008a8154ee00f56ac84e46 
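The `--discovery-token-ca-cert-hash` value in the join commands above is a public-key pin: kubeadm computes the SHA-256 digest of the DER-encoded SubjectPublicKeyInfo of the cluster CA certificate. A minimal sketch that recomputes it from the CA cert on disk (path as used elsewhere in this log):

```go
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm pins sha256 over the raw SubjectPublicKeyInfo of the CA cert.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}
```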
	I1014 07:22:04.163570   13076 cni.go:84] Creating CNI manager for ""
	I1014 07:22:04.163570   13076 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1014 07:22:04.167124   13076 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1014 07:22:04.179134   13076 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1014 07:22:04.187701   13076 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1014 07:22:04.187701   13076 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1014 07:22:04.233053   13076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1014 07:22:04.912894   13076 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1014 07:22:04.925574   13076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-132600 minikube.k8s.io/updated_at=2024_10_14T07_22_04_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b minikube.k8s.io/name=ha-132600 minikube.k8s.io/primary=true
	I1014 07:22:04.927795   13076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 07:22:04.958241   13076 ops.go:34] apiserver oom_adj: -16
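The oom_adj probe above confirms the kernel will shield the API server from the OOM killer (-16). A sketch of the same read, shelling out to pgrep as the logged command does:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pid, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		panic(err)
	}
	data, err := os.ReadFile("/proc/" + strings.TrimSpace(string(pid)) + "/oom_adj")
	if err != nil {
		panic(err)
	}
	fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(data)))
}
```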
	I1014 07:22:05.168034   13076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 07:22:05.667048   13076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 07:22:06.166157   13076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 07:22:06.667561   13076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 07:22:07.167283   13076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 07:22:07.305312   13076 kubeadm.go:1113] duration metric: took 2.3924152s to wait for elevateKubeSystemPrivileges
	I1014 07:22:07.306284   13076 kubeadm.go:394] duration metric: took 18.730142s to StartCluster
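The burst of `kubectl get sa default` runs above, spaced roughly 500ms apart, is a readiness poll: the "default" ServiceAccount only appears once the controller-manager's token controller is up, so the command is retried until it succeeds. A generic sketch of that poll; the timeout is an assumption:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		err := exec.Command("kubectl", "--kubeconfig", "/var/lib/minikube/kubeconfig",
			"get", "sa", "default").Run()
		if err == nil {
			fmt.Println("default ServiceAccount is ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for default ServiceAccount")
}
```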
	I1014 07:22:07.306284   13076 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:22:07.306284   13076 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1014 07:22:07.308077   13076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:22:07.309777   13076 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1014 07:22:07.309777   13076 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.20.108.120 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1014 07:22:07.309953   13076 start.go:241] waiting for startup goroutines ...
	I1014 07:22:07.309868   13076 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1014 07:22:07.310143   13076 addons.go:69] Setting storage-provisioner=true in profile "ha-132600"
	I1014 07:22:07.310235   13076 addons.go:234] Setting addon storage-provisioner=true in "ha-132600"
	I1014 07:22:07.310143   13076 addons.go:69] Setting default-storageclass=true in profile "ha-132600"
	I1014 07:22:07.310402   13076 config.go:182] Loaded profile config "ha-132600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:22:07.310402   13076 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-132600"
	I1014 07:22:07.310402   13076 host.go:66] Checking if "ha-132600" exists ...
	I1014 07:22:07.311457   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:22:07.311926   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:22:07.495748   13076 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.20.96.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1014 07:22:08.130501   13076 start.go:971] {"host.minikube.internal": 172.20.96.1} host record injected into CoreDNS's ConfigMap
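The long sed pipeline above edits the CoreDNS Corefile in place: a hosts{} stanza mapping host.minikube.internal to the host gateway is spliced in ahead of the forward directive, so pods can resolve the Windows host. A small Go sketch of the same string surgery on a toy Corefile (the IP matches the logged value):

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	hostIP := "172.20.96.1"
	corefile := ".:53 {\n    errors\n    forward . /etc/resolv.conf\n}\n"
	stanza := fmt.Sprintf("    hosts {\n        %s host.minikube.internal\n        fallthrough\n    }\n", hostIP)
	// Insert the hosts block immediately before the forward directive.
	patched := strings.Replace(corefile, "    forward .", stanza+"    forward .", 1)
	fmt.Print(patched)
}
```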
	I1014 07:22:09.594501   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:22:09.594501   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:09.594786   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:22:09.594878   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:09.595740   13076 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1014 07:22:09.596691   13076 kapi.go:59] client config for ha-132600: &rest.Config{Host:"https://172.20.111.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-132600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-132600\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2926ae0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
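The rest.Config dumped above is produced by loading the kubeconfig that was just written. A minimal client-go sketch of the same load, using the kubeconfig path from the log:

```go
package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", `C:\Users\jenkins.minikube1\minikube-integration\kubeconfig`)
	if err != nil {
		panic(err)
	}
	// cfg carries the API server address plus the client cert/key and CA paths.
	fmt.Println("API server:", cfg.Host)
}
```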
	I1014 07:22:09.597939   13076 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 07:22:09.598267   13076 cert_rotation.go:140] Starting client certificate rotation controller
	I1014 07:22:09.598703   13076 addons.go:234] Setting addon default-storageclass=true in "ha-132600"
	I1014 07:22:09.598956   13076 host.go:66] Checking if "ha-132600" exists ...
	I1014 07:22:09.600143   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:22:09.600143   13076 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 07:22:09.600247   13076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1014 07:22:09.600352   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:22:11.901872   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:22:11.901872   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:11.902855   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:22:11.913525   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:22:11.914554   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:11.914725   13076 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1014 07:22:11.914841   13076 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1014 07:22:11.915002   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:22:14.163865   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:22:14.163865   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:14.163865   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:22:14.597752   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:22:14.597752   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:14.598273   13076 sshutil.go:53] new ssh client: &{IP:172.20.108.120 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\id_rsa Username:docker}
	I1014 07:22:14.755906   13076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 07:22:16.766331   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:22:16.766798   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:16.767079   13076 sshutil.go:53] new ssh client: &{IP:172.20.108.120 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\id_rsa Username:docker}
	I1014 07:22:16.907784   13076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1014 07:22:17.107355   13076 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1014 07:22:17.107355   13076 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1014 07:22:17.108322   13076 round_trippers.go:463] GET https://172.20.111.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1014 07:22:17.108322   13076 round_trippers.go:469] Request Headers:
	I1014 07:22:17.108322   13076 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:22:17.108322   13076 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:22:17.120122   13076 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1014 07:22:17.120927   13076 round_trippers.go:463] PUT https://172.20.111.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1014 07:22:17.121006   13076 round_trippers.go:469] Request Headers:
	I1014 07:22:17.121006   13076 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:22:17.121006   13076 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:22:17.121089   13076 round_trippers.go:473]     Content-Type: application/json
	I1014 07:22:17.124876   13076 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
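The GET/PUT pair on /storageclasses above is the default-storageclass addon re-asserting "standard" as the cluster default. A sketch of the equivalent update with client-go; the annotation key is the standard Kubernetes one, and the kubeconfig path is an assumption:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	sc, err := cs.StorageV1().StorageClasses().Get(context.TODO(), "standard", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if sc.Annotations == nil {
		sc.Annotations = map[string]string{}
	}
	sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
	// The PUT seen in the round_trippers lines corresponds to this Update call.
	if _, err := cs.StorageV1().StorageClasses().Update(context.TODO(), sc, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("standard StorageClass marked default")
}
```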
	I1014 07:22:17.128782   13076 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1014 07:22:17.133792   13076 addons.go:510] duration metric: took 9.8239112s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1014 07:22:17.133792   13076 start.go:246] waiting for cluster config update ...
	I1014 07:22:17.133792   13076 start.go:255] writing updated cluster config ...
	I1014 07:22:17.140793   13076 out.go:201] 
	I1014 07:22:17.150794   13076 config.go:182] Loaded profile config "ha-132600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:22:17.151784   13076 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\config.json ...
	I1014 07:22:17.157758   13076 out.go:177] * Starting "ha-132600-m02" control-plane node in "ha-132600" cluster
	I1014 07:22:17.160381   13076 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1014 07:22:17.160381   13076 cache.go:56] Caching tarball of preloaded images
	I1014 07:22:17.160381   13076 preload.go:172] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1014 07:22:17.160381   13076 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1014 07:22:17.160381   13076 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\config.json ...
	I1014 07:22:17.168672   13076 start.go:360] acquireMachinesLock for ha-132600-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 07:22:17.169194   13076 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-132600-m02"
	I1014 07:22:17.169334   13076 start.go:93] Provisioning new machine with config: &{Name:ha-132600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-132600 Namespace:default APIServerHAVIP:172.20.111.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.108.120 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1014 07:22:17.169334   13076 start.go:125] createHost starting for "m02" (driver="hyperv")
	I1014 07:22:17.171957   13076 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1014 07:22:17.171957   13076 start.go:159] libmachine.API.Create for "ha-132600" (driver="hyperv")
	I1014 07:22:17.171957   13076 client.go:168] LocalClient.Create starting
	I1014 07:22:17.171957   13076 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I1014 07:22:17.173148   13076 main.go:141] libmachine: Decoding PEM data...
	I1014 07:22:17.173148   13076 main.go:141] libmachine: Parsing certificate...
	I1014 07:22:17.173148   13076 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I1014 07:22:17.173148   13076 main.go:141] libmachine: Decoding PEM data...
	I1014 07:22:17.173802   13076 main.go:141] libmachine: Parsing certificate...
	I1014 07:22:17.173802   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I1014 07:22:19.094383   13076 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I1014 07:22:19.094383   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:19.094970   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I1014 07:22:20.808796   13076 main.go:141] libmachine: [stdout =====>] : False
	
	I1014 07:22:20.808796   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:20.808897   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1014 07:22:22.292959   13076 main.go:141] libmachine: [stdout =====>] : True
	
	I1014 07:22:22.292959   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:22.293074   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1014 07:22:25.811891   13076 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1014 07:22:25.811891   13076 main.go:141] libmachine: [stderr =====>] : 
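Every `[executing ==>]` / `[stdout =====>]` pair in this log is a PowerShell child process spawned from Go. A sketch of that pattern, running one of the Hyper-V cmdlets seen above non-interactively and capturing its stdout; `psOut` is a hypothetical helper name:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// psOut runs a single PowerShell command without profile or prompts and
// returns its trimmed stdout, mirroring the driver's invocation style.
func psOut(cmd string) (string, error) {
	out, err := exec.Command(
		`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
		"-NoProfile", "-NonInteractive", cmd).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	state, err := psOut(`( Hyper-V\Get-VM ha-132600-m02 ).state`)
	if err != nil {
		panic(err)
	}
	fmt.Println("VM state:", state)
}
```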
	I1014 07:22:25.813583   13076 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1014 07:22:26.334218   13076 main.go:141] libmachine: Creating SSH key...
	I1014 07:22:26.579833   13076 main.go:141] libmachine: Creating VM...
	I1014 07:22:26.580305   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1014 07:22:29.402237   13076 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1014 07:22:29.402237   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:29.402237   13076 main.go:141] libmachine: Using switch "Default Switch"
	I1014 07:22:29.402237   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1014 07:22:31.181025   13076 main.go:141] libmachine: [stdout =====>] : True
	
	I1014 07:22:31.181025   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:31.181025   13076 main.go:141] libmachine: Creating VHD
	I1014 07:22:31.181025   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I1014 07:22:34.986705   13076 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 9BEBB030-6569-4A5D-A43D-C976E242B24D
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I1014 07:22:34.986953   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:34.986953   13076 main.go:141] libmachine: Writing magic tar header
	I1014 07:22:34.986953   13076 main.go:141] libmachine: Writing SSH key tar header
	I1014 07:22:34.998864   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I1014 07:22:38.117497   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:22:38.117497   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:38.117497   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\disk.vhd' -SizeBytes 20000MB
	I1014 07:22:40.750048   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:22:40.750048   13076 main.go:141] libmachine: [stderr =====>] : 
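The "Writing magic tar header" / "Writing SSH key tar header" steps above explain the odd 10MB fixed VHD: the machine's SSH key is embedded as a tar archive at the head of the disk, and the guest unpacks it on first boot when it spots the tar magic; the VHD is then converted to dynamic and resized. A sketch of the embedding write, assuming the key and VHD filenames from the log and simplifying the tar layout:

```go
package main

import (
	"archive/tar"
	"os"
)

func main() {
	key, err := os.ReadFile("id_rsa")
	if err != nil {
		panic(err)
	}
	// Open the pre-created fixed VHD and overwrite its leading bytes in place
	// (O_WRONLY without truncation keeps the rest of the disk image intact).
	vhd, err := os.OpenFile("fixed.vhd", os.O_WRONLY, 0)
	if err != nil {
		panic(err)
	}
	defer vhd.Close()
	tw := tar.NewWriter(vhd)
	hdr := &tar.Header{Name: ".ssh/id_rsa", Mode: 0600, Size: int64(len(key))}
	if err := tw.WriteHeader(hdr); err != nil {
		panic(err)
	}
	if _, err := tw.Write(key); err != nil {
		panic(err)
	}
	// Flush pads the entry but leaves the trailing VHD bytes untouched.
	if err := tw.Flush(); err != nil {
		panic(err)
	}
}
```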
	I1014 07:22:40.750048   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-132600-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I1014 07:22:44.293371   13076 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-132600-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I1014 07:22:44.294009   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:44.294158   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-132600-m02 -DynamicMemoryEnabled $false
	I1014 07:22:46.495846   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:22:46.496106   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:46.496211   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-132600-m02 -Count 2
	I1014 07:22:48.637702   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:22:48.637702   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:48.637702   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-132600-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\boot2docker.iso'
	I1014 07:22:51.128606   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:22:51.128606   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:51.128606   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-132600-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\disk.vhd'
	I1014 07:22:53.765202   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:22:53.765775   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:53.765775   13076 main.go:141] libmachine: Starting VM...
	I1014 07:22:53.765847   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-132600-m02
	I1014 07:22:56.961396   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:22:56.961396   13076 main.go:141] libmachine: [stderr =====>] : 
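The cmdlet sequence above (New-VM through Start-VM) assembles the m02 machine: create the VM on the chosen switch, pin memory and CPU count, attach the boot2docker ISO and data disk, then power it on. A condensed sketch issuing the same cmdlets in order; `psRun` is a hypothetical runner and the paths match the logged ones:

```go
package main

import "os/exec"

func psRun(cmd string) error {
	return exec.Command(`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
		"-NoProfile", "-NonInteractive", cmd).Run()
}

func main() {
	dir := `C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02`
	steps := []string{
		`Hyper-V\New-VM ha-132600-m02 -Path '` + dir + `' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB`,
		`Hyper-V\Set-VMMemory -VMName ha-132600-m02 -DynamicMemoryEnabled $false`,
		`Hyper-V\Set-VMProcessor ha-132600-m02 -Count 2`,
		`Hyper-V\Set-VMDvdDrive -VMName ha-132600-m02 -Path '` + dir + `\boot2docker.iso'`,
		`Hyper-V\Add-VMHardDiskDrive -VMName ha-132600-m02 -Path '` + dir + `\disk.vhd'`,
		`Hyper-V\Start-VM ha-132600-m02`,
	}
	for _, s := range steps {
		if err := psRun(s); err != nil {
			panic(err)
		}
	}
}
```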
	I1014 07:22:56.962040   13076 main.go:141] libmachine: Waiting for host to start...
	I1014 07:22:56.962040   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:22:59.251592   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:22:59.251592   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:59.251838   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:01.758067   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:23:01.758159   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:02.758293   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:04.921490   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:04.921565   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:04.921802   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:07.441595   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:23:07.441595   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:08.443053   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:10.659721   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:10.659721   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:10.660418   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:13.126618   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:23:13.126952   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:14.127183   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:16.307414   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:16.307414   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:16.307500   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:18.748902   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:23:18.749678   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:19.749962   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:21.893323   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:21.893323   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:21.893323   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:24.464351   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:23:24.465392   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:24.465471   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:26.567626   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:26.567626   13076 main.go:141] libmachine: [stderr =====>] : 
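The alternating state/ipaddresses queries above are a poll: Hyper-V reports the VM as Running well before the integration services publish an IPv4 address, so the driver keeps re-querying (with a one-second sleep, visible in the timestamps) until the address is non-empty. A self-contained sketch of that loop:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	query := `(( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]`
	for {
		out, err := exec.Command(`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
			"-NoProfile", "-NonInteractive", query).Output()
		if ip := strings.TrimSpace(string(out)); err == nil && ip != "" {
			fmt.Println("VM address:", ip)
			return
		}
		// Empty stdout means no address yet; retry after a short pause.
		time.Sleep(time.Second)
	}
}
```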
	I1014 07:23:26.567626   13076 machine.go:93] provisionDockerMachine start ...
	I1014 07:23:26.568400   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:28.681591   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:28.681591   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:28.682618   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:31.178308   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:23:31.178308   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:31.192309   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:23:31.207523   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.111.83 22 <nil> <nil>}
	I1014 07:23:31.208024   13076 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 07:23:31.337861   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1014 07:23:31.337861   13076 buildroot.go:166] provisioning hostname "ha-132600-m02"
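The native SSH client above ran `hostname` as a liveness check before provisioning begins. A minimal sketch of that exchange with golang.org/x/crypto/ssh; the key path and address mirror the log, and host-key checking is disabled only because this is a throwaway test VM:

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile(`C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\id_rsa`)
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	client, err := ssh.Dial("tcp", "172.20.111.83:22", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	})
	if err != nil {
		panic(err)
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()
	out, err := session.Output("hostname")
	if err != nil {
		panic(err)
	}
	fmt.Printf("hostname: %s", out)
}
```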
	I1014 07:23:31.337941   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:33.402839   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:33.402839   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:33.403340   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:35.894312   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:23:35.894312   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:35.901210   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:23:35.901699   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.111.83 22 <nil> <nil>}
	I1014 07:23:35.901823   13076 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-132600-m02 && echo "ha-132600-m02" | sudo tee /etc/hostname
	I1014 07:23:36.071040   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-132600-m02
	
	I1014 07:23:36.071040   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:38.150707   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:38.151729   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:38.151729   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:40.639242   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:23:40.639242   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:40.644598   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:23:40.645419   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.111.83 22 <nil> <nil>}
	I1014 07:23:40.645490   13076 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-132600-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-132600-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-132600-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 07:23:40.801974   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 07:23:40.801974   13076 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I1014 07:23:40.801974   13076 buildroot.go:174] setting up certificates
	I1014 07:23:40.801974   13076 provision.go:84] configureAuth start
	I1014 07:23:40.801974   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:42.869101   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:42.869101   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:42.869101   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:45.382211   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:23:45.382688   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:45.382688   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:47.501182   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:47.501778   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:47.501832   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:50.008838   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:23:50.008838   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:50.009193   13076 provision.go:143] copyHostCerts
	I1014 07:23:50.009346   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I1014 07:23:50.009346   13076 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I1014 07:23:50.009346   13076 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I1014 07:23:50.010015   13076 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1014 07:23:50.011395   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I1014 07:23:50.011967   13076 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I1014 07:23:50.011967   13076 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I1014 07:23:50.011967   13076 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1014 07:23:50.013212   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I1014 07:23:50.014109   13076 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I1014 07:23:50.014109   13076 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I1014 07:23:50.014507   13076 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I1014 07:23:50.015649   13076 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-132600-m02 san=[127.0.0.1 172.20.111.83 ha-132600-m02 localhost minikube]
	I1014 07:23:50.130152   13076 provision.go:177] copyRemoteCerts
	I1014 07:23:50.140319   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 07:23:50.140319   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:52.242533   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:52.242533   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:52.243067   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:54.757999   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:23:54.758132   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:54.758132   13076 sshutil.go:53] new ssh client: &{IP:172.20.111.83 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\id_rsa Username:docker}
	I1014 07:23:54.867658   13076 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7273333s)
	I1014 07:23:54.867658   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1014 07:23:54.867658   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1014 07:23:54.915126   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1014 07:23:54.915808   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I1014 07:23:54.962775   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1014 07:23:54.963449   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1014 07:23:55.012733   13076 provision.go:87] duration metric: took 14.2107408s to configureAuth
	I1014 07:23:55.012733   13076 buildroot.go:189] setting minikube options for container-runtime
	I1014 07:23:55.013734   13076 config.go:182] Loaded profile config "ha-132600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:23:55.013734   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:57.174688   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:57.174688   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:57.175736   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:59.668037   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:23:59.668597   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:59.675074   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:23:59.675608   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.111.83 22 <nil> <nil>}
	I1014 07:23:59.675873   13076 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1014 07:23:59.826626   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1014 07:23:59.826626   13076 buildroot.go:70] root file system type: tmpfs
	I1014 07:23:59.826926   13076 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1014 07:23:59.827038   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:01.963890   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:01.964568   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:01.964568   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:04.515125   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:04.515943   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:04.521824   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:24:04.522248   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.111.83 22 <nil> <nil>}
	I1014 07:24:04.522248   13076 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.20.108.120"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1014 07:24:04.691427   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.20.108.120
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1014 07:24:04.692090   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:06.819823   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:06.820339   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:06.820339   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:09.321630   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:09.321630   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:09.326748   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:24:09.327748   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.111.83 22 <nil> <nil>}
	I1014 07:24:09.327808   13076 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1014 07:24:11.561493   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
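
The install step above is the usual compare-then-swap: diff exits non-zero (here because no unit file existed yet, hence the "can't stat" message), so the rendered file is moved into place, the daemon is reloaded, and systemctl enable creates the multi-user.target symlink shown. The unit text itself relies on systemd's reset rule: an empty "ExecStart=" clears any command inherited from a base unit or earlier drop-in, after which a single "ExecStart=" is legal for non-oneshot services. A minimal sketch of the same pattern as a drop-in (path and command are illustrative, not taken from this run):

    # /etc/systemd/system/docker.service.d/override.conf  (hypothetical drop-in)
    [Service]
    ExecStart=
    ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
    # apply it: sudo systemctl daemon-reload && sudo systemctl restart docker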
	
	I1014 07:24:11.561493   13076 machine.go:96] duration metric: took 44.993809s to provisionDockerMachine
	I1014 07:24:11.561493   13076 client.go:171] duration metric: took 1m54.3893961s to LocalClient.Create
	I1014 07:24:11.561493   13076 start.go:167] duration metric: took 1m54.3893961s to libmachine.API.Create "ha-132600"
	I1014 07:24:11.561493   13076 start.go:293] postStartSetup for "ha-132600-m02" (driver="hyperv")
	I1014 07:24:11.561493   13076 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 07:24:11.572490   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 07:24:11.572490   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:13.652544   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:13.653401   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:13.653599   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:16.188638   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:16.188638   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:16.189304   13076 sshutil.go:53] new ssh client: &{IP:172.20.111.83 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\id_rsa Username:docker}
	I1014 07:24:16.298812   13076 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7262867s)
	I1014 07:24:16.310231   13076 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 07:24:16.316927   13076 info.go:137] Remote host: Buildroot 2023.02.9
	I1014 07:24:16.316927   13076 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I1014 07:24:16.317465   13076 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I1014 07:24:16.318548   13076 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem -> 9362.pem in /etc/ssl/certs
	I1014 07:24:16.318548   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem -> /etc/ssl/certs/9362.pem
	I1014 07:24:16.332285   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 07:24:16.351469   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem --> /etc/ssl/certs/9362.pem (1708 bytes)
	I1014 07:24:16.401820   13076 start.go:296] duration metric: took 4.8403207s for postStartSetup
	I1014 07:24:16.404096   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:18.500218   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:18.501234   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:18.501234   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:20.974982   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:20.975125   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:20.975125   13076 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\config.json ...
	I1014 07:24:20.978124   13076 start.go:128] duration metric: took 2m3.8086365s to createHost
	I1014 07:24:20.978209   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:23.112937   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:23.113001   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:23.113001   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:25.636675   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:25.637206   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:25.641953   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:24:25.642670   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.111.83 22 <nil> <nil>}
	I1014 07:24:25.642670   13076 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1014 07:24:25.768945   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728915865.766702127
	
	I1014 07:24:25.769174   13076 fix.go:216] guest clock: 1728915865.766702127
	I1014 07:24:25.769174   13076 fix.go:229] Guest: 2024-10-14 07:24:25.766702127 -0700 PDT Remote: 2024-10-14 07:24:20.978124 -0700 PDT m=+320.730336401 (delta=4.788578127s)
	I1014 07:24:25.769174   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:27.845838   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:27.845838   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:27.845838   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:30.364175   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:30.364263   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:30.369382   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:24:30.369967   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.111.83 22 <nil> <nil>}
	I1014 07:24:30.370049   13076 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1728915865
	I1014 07:24:30.508807   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Oct 14 14:24:25 UTC 2024
	
	I1014 07:24:30.508807   13076 fix.go:236] clock set: Mon Oct 14 14:24:25 UTC 2024
	 (err=<nil>)
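
The two fix.go lines above are minikube's guest-clock check: guest 07:24:25.766702127 minus host 07:24:20.978124 gives the logged delta of 4.788578127s, which exceeds the drift tolerance, so the guest clock is reset with "date -s @<epoch>" (1728915865 is the whole-second epoch for 14:24:25 UTC, as the command's own output confirms). Generalized, the step amounts to this sketch (host side; the address and user are the ones this run used):

    # push the host clock into the guest over SSH (sketch of the logged step)
    epoch=$(date +%s)                               # host time, whole seconds
    ssh docker@172.20.111.83 "sudo date -s @${epoch}"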
	I1014 07:24:30.508807   13076 start.go:83] releasing machines lock for "ha-132600-m02", held for 2m13.3394475s
	I1014 07:24:30.508807   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:32.570516   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:32.570516   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:32.570516   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:35.081338   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:35.081338   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:35.084330   13076 out.go:177] * Found network options:
	I1014 07:24:35.087197   13076 out.go:177]   - NO_PROXY=172.20.108.120
	W1014 07:24:35.089531   13076 proxy.go:119] fail to check proxy env: Error ip not in block
	I1014 07:24:35.091879   13076 out.go:177]   - NO_PROXY=172.20.108.120
	W1014 07:24:35.094409   13076 proxy.go:119] fail to check proxy env: Error ip not in block
	W1014 07:24:35.095622   13076 proxy.go:119] fail to check proxy env: Error ip not in block
	I1014 07:24:35.098960   13076 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1014 07:24:35.099124   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:35.109488   13076 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1014 07:24:35.109488   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:37.293745   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:37.293818   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:37.293870   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:37.344951   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:37.344951   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:37.344951   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:39.933767   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:39.934139   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:39.934445   13076 sshutil.go:53] new ssh client: &{IP:172.20.111.83 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\id_rsa Username:docker}
	I1014 07:24:39.999458   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:39.999458   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:39.999458   13076 sshutil.go:53] new ssh client: &{IP:172.20.111.83 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\id_rsa Username:docker}
	I1014 07:24:40.039802   13076 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.930308s)
	W1014 07:24:40.039802   13076 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 07:24:40.051889   13076 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 07:24:40.057601   13076 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.9585844s)
	W1014 07:24:40.057601   13076 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
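
Exit status 127 with "command not found" is the whole story here: the runner invoked the Windows binary name curl.exe inside the Linux guest, where only curl exists, so the registry probe never ran at all. The "Failing to connect to https://registry.k8s.io/" warning emitted a few lines below is the user-facing echo of this failed probe, not evidence of an actual network problem. The equivalent probe with the Linux binary name would be (sketch, run inside the guest):

    curl -sS -m 2 https://registry.k8s.io/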
	I1014 07:24:40.090774   13076 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1014 07:24:40.090774   13076 start.go:495] detecting cgroup driver to use...
	I1014 07:24:40.091315   13076 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 07:24:40.139757   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1014 07:24:40.172033   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	W1014 07:24:40.178804   13076 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W1014 07:24:40.178804   13076 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1014 07:24:40.194040   13076 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1014 07:24:40.209169   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1014 07:24:40.241042   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1014 07:24:40.273436   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1014 07:24:40.304842   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1014 07:24:40.336705   13076 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 07:24:40.371635   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1014 07:24:40.404378   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1014 07:24:40.435805   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1014 07:24:40.485319   13076 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 07:24:40.505473   13076 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
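
The status-255 failure above is expected on a fresh guest: net.bridge.bridge-nf-call-iptables only appears under /proc/sys once the br_netfilter module is loaded, which is exactly what the next two commands do (load the module, then enable IPv4 forwarding). As a manual sequence (sketch):

    sudo modprobe br_netfilter
    sudo sysctl net.bridge.bridge-nf-call-iptables   # resolvable once the module is loaded
    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward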
	I1014 07:24:40.516975   13076 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1014 07:24:40.550905   13076 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 07:24:40.581632   13076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:24:40.785773   13076 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1014 07:24:40.819978   13076 start.go:495] detecting cgroup driver to use...
	I1014 07:24:40.830992   13076 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1014 07:24:40.876801   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 07:24:40.913930   13076 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 07:24:40.966444   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 07:24:41.006432   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1014 07:24:41.044531   13076 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1014 07:24:41.114617   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1014 07:24:41.139995   13076 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 07:24:41.187779   13076 ssh_runner.go:195] Run: which cri-dockerd
	I1014 07:24:41.207367   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1014 07:24:41.226988   13076 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1014 07:24:41.272924   13076 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1014 07:24:41.470204   13076 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1014 07:24:41.650925   13076 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1014 07:24:41.650925   13076 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1014 07:24:41.694417   13076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:24:41.893295   13076 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1014 07:25:43.008408   13076 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.115034s)
	I1014 07:25:43.020423   13076 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I1014 07:25:43.059257   13076 out.go:201] 
	W1014 07:25:43.062124   13076 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Oct 14 14:24:09 ha-132600-m02 systemd[1]: Starting Docker Application Container Engine...
	Oct 14 14:24:09 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:09.938489944Z" level=info msg="Starting up"
	Oct 14 14:24:09 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:09.941531731Z" level=info msg="containerd not running, starting managed containerd"
	Oct 14 14:24:09 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:09.942638427Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=671
	Oct 14 14:24:09 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:09.976195986Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.003978070Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004049170Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004119469Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004233069Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004318268Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004457268Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004664767Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004761167Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004782067Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004795467Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004887666Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.005331964Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.008453752Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.008545052Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.008673551Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.008759151Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.008859951Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.008988550Z" level=info msg="metadata content store policy set" policy=shared
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.034095951Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.034335550Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.034407650Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.034432250Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.034449450Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.034603849Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.034945948Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035129847Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035263947Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035294447Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035312846Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035328546Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035343646Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035419846Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035443346Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035495946Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035526946Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035542746Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035565345Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035583545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035598745Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035630745Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035648245Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035663945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035689645Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035709545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035731545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035750545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035764245Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035777845Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035792145Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035809345Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035833944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035849444Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035864444Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036025444Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036057644Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036074443Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036087843Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036099043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036115343Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036131843Z" level=info msg="NRI interface is disabled by configuration."
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036482842Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036737141Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036799541Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036822041Z" level=info msg="containerd successfully booted in 0.062283s"
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.009541514Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.046546477Z" level=info msg="Loading containers: start."
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.205429391Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.426461627Z" level=info msg="Loading containers: done."
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.448023414Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.448125314Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.448200613Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.448511212Z" level=info msg="Daemon has completed initialization"
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.556153647Z" level=info msg="API listen on /var/run/docker.sock"
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.556338446Z" level=info msg="API listen on [::]:2376"
	Oct 14 14:24:11 ha-132600-m02 systemd[1]: Started Docker Application Container Engine.
	Oct 14 14:24:41 ha-132600-m02 systemd[1]: Stopping Docker Application Container Engine...
	Oct 14 14:24:41 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:41.916201367Z" level=info msg="Processing signal 'terminated'"
	Oct 14 14:24:41 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:41.918697964Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Oct 14 14:24:41 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:41.919058063Z" level=info msg="Daemon shutdown complete"
	Oct 14 14:24:41 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:41.919215663Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Oct 14 14:24:41 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:41.919252063Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Oct 14 14:24:42 ha-132600-m02 systemd[1]: docker.service: Deactivated successfully.
	Oct 14 14:24:42 ha-132600-m02 systemd[1]: Stopped Docker Application Container Engine.
	Oct 14 14:24:42 ha-132600-m02 systemd[1]: Starting Docker Application Container Engine...
	Oct 14 14:24:42 ha-132600-m02 dockerd[1075]: time="2024-10-14T14:24:42.978494717Z" level=info msg="Starting up"
	Oct 14 14:25:43 ha-132600-m02 dockerd[1075]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Oct 14 14:25:43 ha-132600-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Oct 14 14:25:43 ha-132600-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Oct 14 14:25:43 ha-132600-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
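
Reading the journal: the first dockerd (pid 665) spawned its own managed containerd on /var/run/docker/containerd/containerd.sock and came up cleanly; after minikube's "systemctl restart docker" at 14:24:41, the second dockerd (pid 1075) instead spent its entire 60s startup deadline dialing /run/containerd/containerd.sock, the system containerd socket, whose service minikube had force-stopped moments earlier (07:24:40 above). A plausible reading, though not proven by this log alone, is that a leftover socket file at that path made dockerd try to attach to an external containerd rather than start its managed one, so the dial deadline expired and docker.service failed, which is the RUNTIME_ENABLE error this test ultimately reports. Triage along the lines the error text itself suggests (sketch, on the guest):

    systemctl status docker.service containerd.service
    journalctl -xeu docker.service
    ls -l /run/containerd/containerd.sock   # a stale socket here would support the reading above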
	W1014 07:25:43.062124   13076 out.go:270] * 
	W1014 07:25:43.062827   13076 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 07:25:43.067027   13076 out.go:201] 
	
	
	==> Docker <==
	Oct 14 14:22:32 ha-132600 cri-dockerd[1324]: time="2024-10-14T14:22:32Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d7137fb8ec569089599a03e11d18eda509d210d29fa54b5e6a0a8d7dd7a54e7f/resolv.conf as [nameserver 172.20.96.1]"
	Oct 14 14:22:32 ha-132600 cri-dockerd[1324]: time="2024-10-14T14:22:32Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/90ffeb0ce1aafcb639832f7144bd2dbb0b5c9deb53555b49e386b3d3e96d8bc3/resolv.conf as [nameserver 172.20.96.1]"
	Oct 14 14:22:32 ha-132600 cri-dockerd[1324]: time="2024-10-14T14:22:32Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7278ecf7cc9466e1fee2170d699055f6498d2ece8da31c4b3696ed85b3cd38cf/resolv.conf as [nameserver 172.20.96.1]"
	Oct 14 14:22:32 ha-132600 dockerd[1431]: time="2024-10-14T14:22:32.922383735Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 14 14:22:32 ha-132600 dockerd[1431]: time="2024-10-14T14:22:32.922463635Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 14 14:22:32 ha-132600 dockerd[1431]: time="2024-10-14T14:22:32.922481235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:22:32 ha-132600 dockerd[1431]: time="2024-10-14T14:22:32.922580735Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:22:32 ha-132600 dockerd[1431]: time="2024-10-14T14:22:32.996609949Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 14 14:22:32 ha-132600 dockerd[1431]: time="2024-10-14T14:22:32.996807948Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 14 14:22:32 ha-132600 dockerd[1431]: time="2024-10-14T14:22:32.996907948Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:22:32 ha-132600 dockerd[1431]: time="2024-10-14T14:22:32.998275745Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:22:33 ha-132600 dockerd[1431]: time="2024-10-14T14:22:33.069443031Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 14 14:22:33 ha-132600 dockerd[1431]: time="2024-10-14T14:22:33.070022429Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 14 14:22:33 ha-132600 dockerd[1431]: time="2024-10-14T14:22:33.070369527Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:22:33 ha-132600 dockerd[1431]: time="2024-10-14T14:22:33.076841098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:26:17 ha-132600 dockerd[1431]: time="2024-10-14T14:26:17.675114485Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 14 14:26:17 ha-132600 dockerd[1431]: time="2024-10-14T14:26:17.675249284Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 14 14:26:17 ha-132600 dockerd[1431]: time="2024-10-14T14:26:17.675264584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:26:17 ha-132600 dockerd[1431]: time="2024-10-14T14:26:17.675974482Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:26:17 ha-132600 cri-dockerd[1324]: time="2024-10-14T14:26:17Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/91751edf004eb5236aa2443a5cff30bdc82ba05d39a3583d9026a4f19faba52f/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Oct 14 14:26:19 ha-132600 cri-dockerd[1324]: time="2024-10-14T14:26:19Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Oct 14 14:26:19 ha-132600 dockerd[1431]: time="2024-10-14T14:26:19.456978196Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 14 14:26:19 ha-132600 dockerd[1431]: time="2024-10-14T14:26:19.457173796Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 14 14:26:19 ha-132600 dockerd[1431]: time="2024-10-14T14:26:19.457616296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:26:19 ha-132600 dockerd[1431]: time="2024-10-14T14:26:19.457980197Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2b8d44d586369       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   17 minutes ago      Running             busybox                   0                   91751edf004eb       busybox-7dff88458-kr92j
	5a6196684fc6e       c69fa2e9cbf5f                                                                                         21 minutes ago      Running             coredns                   0                   7278ecf7cc946       coredns-7c65d6cfc9-4qfrq
	81d6fdac8115f       6e38f40d628db                                                                                         21 minutes ago      Running             storage-provisioner       0                   90ffeb0ce1aaf       storage-provisioner
	52c3e5370a6c8       c69fa2e9cbf5f                                                                                         21 minutes ago      Running             coredns                   0                   d7137fb8ec569       coredns-7c65d6cfc9-zf6cd
	dae2c0aa67af3       kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387              21 minutes ago      Running             kindnet-cni               0                   277e64b59b8aa       kindnet-rkjqr
	4745a4b0dc379       60c005f310ff3                                                                                         21 minutes ago      Running             kube-proxy                0                   15f605d31df67       kube-proxy-zkbj8
	b08f7a3c2a5a6       ghcr.io/kube-vip/kube-vip@sha256:9e23baad11ae3e69d739430b9fdb60df22356b7da4b4f4e458fae0541619deb4     21 minutes ago      Running             kube-vip                  0                   46cfd7b278d40       kube-vip-ha-132600
	84fd76d493d80       2e96e5913fc06                                                                                         21 minutes ago      Running             etcd                      0                   d71566d00e2f9       etcd-ha-132600
	b661cb6713103       6bab7719df100                                                                                         21 minutes ago      Running             kube-apiserver            0                   cb18be6165830       kube-apiserver-ha-132600
	35c870864a800       9aa1fad941575                                                                                         21 minutes ago      Running             kube-scheduler            0                   36593363b631a       kube-scheduler-ha-132600
	4a8cce31aa3a2       175ffd71cce3d                                                                                         21 minutes ago      Running             kube-controller-manager   0                   f5fa69fe68b07       kube-controller-manager-ha-132600
	
	
	==> coredns [52c3e5370a6c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6020d6b23d8ab86e45d1d2aab12a43bd19ffd1800c3e6fd0a66779be525e59cd5618fbe141b55ee3a43e920652bc72e7efafa696e55e63b2f614e5fc80dabe10
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:41885 - 43638 "HINFO IN 5147527986633681541.7511835962423692488. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.048565283s
	[INFO] 10.244.0.4:49801 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0004036s
	[INFO] 10.244.0.4:57121 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.178873632s
	[INFO] 10.244.0.4:54985 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000301s
	[INFO] 10.244.0.4:47603 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000273799s
	[INFO] 10.244.0.4:50030 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000276399s
	[INFO] 10.244.0.4:38124 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000439399s
	[INFO] 10.244.0.4:43764 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000347699s
	[INFO] 10.244.0.4:53583 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0003446s
	[INFO] 10.244.0.4:50771 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0002502s
	[INFO] 10.244.0.4:32805 - 5 "PTR IN 1.96.20.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.0002399s
	
	
	==> coredns [5a6196684fc6] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6020d6b23d8ab86e45d1d2aab12a43bd19ffd1800c3e6fd0a66779be525e59cd5618fbe141b55ee3a43e920652bc72e7efafa696e55e63b2f614e5fc80dabe10
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:50464 - 22593 "HINFO IN 8090798371432773748.7872157757733337044. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.084180423s
	[INFO] 10.244.0.4:46373 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.004105792s
	[INFO] 10.244.0.4:41175 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.166814894s
	[INFO] 10.244.0.4:43040 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002867s
	[INFO] 10.244.0.4:33549 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.045685805s
	[INFO] 10.244.0.4:60793 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000884598s
	[INFO] 10.244.0.4:57558 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001339s
	[INFO] 10.244.0.4:33138 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.040147217s
	[INFO] 10.244.0.4:46303 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0002673s
	[INFO] 10.244.0.4:60761 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001771s
	[INFO] 10.244.0.4:55567 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000249599s
	
	
	==> describe nodes <==
	Name:               ha-132600
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-132600
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b
	                    minikube.k8s.io/name=ha-132600
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_14T07_22_04_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Oct 2024 14:22:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-132600
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Oct 2024 14:43:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 14 Oct 2024 14:41:59 +0000   Mon, 14 Oct 2024 14:22:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 14 Oct 2024 14:41:59 +0000   Mon, 14 Oct 2024 14:22:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 14 Oct 2024 14:41:59 +0000   Mon, 14 Oct 2024 14:22:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 14 Oct 2024 14:41:59 +0000   Mon, 14 Oct 2024 14:22:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.20.108.120
	  Hostname:    ha-132600
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 07bcc58a0c7e4f0eaae96253dcf0a4ae
	  System UUID:                e06d00fc-5ebc-1a49-bf31-4c6ce500fe9c
	  Boot ID:                    215d765d-07e3-4bb7-ba3c-58ab142d5afa
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-kr92j              0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 coredns-7c65d6cfc9-4qfrq             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 coredns-7c65d6cfc9-zf6cd             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-ha-132600                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kindnet-rkjqr                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      21m
	  kube-system                 kube-apiserver-ha-132600             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-ha-132600    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-zkbj8                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-ha-132600             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-vip-ha-132600                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 21m   kube-proxy       
	  Normal  Starting                 21m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  21m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m   kubelet          Node ha-132600 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m   kubelet          Node ha-132600 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m   kubelet          Node ha-132600 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           21m   node-controller  Node ha-132600 event: Registered Node ha-132600 in Controller
	  Normal  NodeReady                21m   kubelet          Node ha-132600 status is now: NodeReady
	
	
	Name:               ha-132600-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-132600-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b
	                    minikube.k8s.io/name=ha-132600
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_14T07_42_14_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Oct 2024 14:42:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-132600-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Oct 2024 14:43:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 14 Oct 2024 14:43:14 +0000   Mon, 14 Oct 2024 14:42:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 14 Oct 2024 14:43:14 +0000   Mon, 14 Oct 2024 14:42:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 14 Oct 2024 14:43:14 +0000   Mon, 14 Oct 2024 14:42:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 14 Oct 2024 14:43:14 +0000   Mon, 14 Oct 2024 14:42:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.20.111.174
	  Hostname:    ha-132600-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 3833a05e76bc4842ac73a1d2223670ec
	  System UUID:                f1d27ade-116b-0642-ac95-727d62870b2a
	  Boot ID:                    07765716-ab52-4f7d-8d78-8fbd996c8cc0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-8thz6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kindnet-dznf8              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      88s
	  kube-system                 kube-proxy-q6wxd           0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 76s                kube-proxy       
	  Normal  NodeHasSufficientMemory  89s (x2 over 89s)  kubelet          Node ha-132600-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  89s                kubelet          Updated Node Allocatable limit across pods
	  Normal  CIDRAssignmentFailed     88s                cidrAllocator    Node ha-132600-m03 status is now: CIDRAssignmentFailed
	  Normal  NodeHasNoDiskPressure    88s (x2 over 89s)  kubelet          Node ha-132600-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     88s (x2 over 89s)  kubelet          Node ha-132600-m03 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           85s                node-controller  Node ha-132600-m03 event: Registered Node ha-132600-m03 in Controller
	  Normal  NodeReady                56s                kubelet          Node ha-132600-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +7.050916] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +45.956887] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.179434] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[Oct14 14:21] systemd-fstab-generator[997]: Ignoring "noauto" option for root device
	[  +0.095957] kauditd_printk_skb: 65 callbacks suppressed
	[  +0.527367] systemd-fstab-generator[1037]: Ignoring "noauto" option for root device
	[  +0.185829] systemd-fstab-generator[1049]: Ignoring "noauto" option for root device
	[  +0.220100] systemd-fstab-generator[1063]: Ignoring "noauto" option for root device
	[  +2.802946] systemd-fstab-generator[1277]: Ignoring "noauto" option for root device
	[  +0.209648] systemd-fstab-generator[1289]: Ignoring "noauto" option for root device
	[  +0.207153] systemd-fstab-generator[1301]: Ignoring "noauto" option for root device
	[  +0.288223] systemd-fstab-generator[1316]: Ignoring "noauto" option for root device
	[ +11.411364] systemd-fstab-generator[1416]: Ignoring "noauto" option for root device
	[  +0.107418] kauditd_printk_skb: 202 callbacks suppressed
	[  +3.752049] systemd-fstab-generator[1674]: Ignoring "noauto" option for root device
	[  +6.264384] systemd-fstab-generator[1823]: Ignoring "noauto" option for root device
	[  +0.102334] kauditd_printk_skb: 70 callbacks suppressed
	[  +5.637673] kauditd_printk_skb: 67 callbacks suppressed
	[Oct14 14:22] systemd-fstab-generator[2317]: Ignoring "noauto" option for root device
	[  +5.137329] kauditd_printk_skb: 17 callbacks suppressed
	[  +8.104433] kauditd_printk_skb: 29 callbacks suppressed
	[Oct14 14:26] kauditd_printk_skb: 28 callbacks suppressed
	[Oct14 14:42] hrtimer: interrupt took 6612884 ns
	
	
	==> etcd [84fd76d493d8] <==
	{"level":"warn","ts":"2024-10-14T14:42:06.625714Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"233.064235ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-14T14:42:06.625791Z","caller":"traceutil/trace.go:171","msg":"trace[1441231401] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2578; }","duration":"233.140335ms","start":"2024-10-14T14:42:06.392638Z","end":"2024-10-14T14:42:06.625778Z","steps":["trace[1441231401] 'agreement among raft nodes before linearized reading'  (duration: 233.011936ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-14T14:42:06.838240Z","caller":"traceutil/trace.go:171","msg":"trace[129313711] linearizableReadLoop","detail":"{readStateIndex:2841; appliedIndex:2840; }","duration":"210.836289ms","start":"2024-10-14T14:42:06.627384Z","end":"2024-10-14T14:42:06.838220Z","steps":["trace[129313711] 'read index received'  (duration: 140.43006ms)","trace[129313711] 'applied index is now lower than readState.Index'  (duration: 70.404629ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-14T14:42:06.838564Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"211.106689ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-14T14:42:06.838765Z","caller":"traceutil/trace.go:171","msg":"trace[744257141] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2578; }","duration":"211.373088ms","start":"2024-10-14T14:42:06.627381Z","end":"2024-10-14T14:42:06.838754Z","steps":["trace[744257141] 'agreement among raft nodes before linearized reading'  (duration: 210.908189ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-14T14:42:07.474608Z","caller":"traceutil/trace.go:171","msg":"trace[1611887997] linearizableReadLoop","detail":"{readStateIndex:2842; appliedIndex:2841; }","duration":"173.05738ms","start":"2024-10-14T14:42:07.301532Z","end":"2024-10-14T14:42:07.474589Z","steps":["trace[1611887997] 'read index received'  (duration: 172.879381ms)","trace[1611887997] 'applied index is now lower than readState.Index'  (duration: 177.299µs)"],"step_count":2}
	{"level":"info","ts":"2024-10-14T14:42:07.474817Z","caller":"traceutil/trace.go:171","msg":"trace[1935321534] transaction","detail":"{read_only:false; response_revision:2579; number_of_response:1; }","duration":"268.831248ms","start":"2024-10-14T14:42:07.205940Z","end":"2024-10-14T14:42:07.474771Z","steps":["trace[1935321534] 'process raft request'  (duration: 268.413149ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-14T14:42:07.475266Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"173.720679ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-14T14:42:07.475322Z","caller":"traceutil/trace.go:171","msg":"trace[213807606] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:2579; }","duration":"173.789679ms","start":"2024-10-14T14:42:07.301524Z","end":"2024-10-14T14:42:07.475314Z","steps":["trace[213807606] 'agreement among raft nodes before linearized reading'  (duration: 173.704979ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-14T14:42:19.018144Z","caller":"traceutil/trace.go:171","msg":"trace[410087756] transaction","detail":"{read_only:false; response_revision:2632; number_of_response:1; }","duration":"175.067373ms","start":"2024-10-14T14:42:18.843055Z","end":"2024-10-14T14:42:19.018123Z","steps":["trace[410087756] 'process raft request'  (duration: 174.514675ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-14T14:42:19.375954Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"237.591721ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-132600-m03\" ","response":"range_response_count:1 size:2962"}
	{"level":"info","ts":"2024-10-14T14:42:19.377451Z","caller":"traceutil/trace.go:171","msg":"trace[946791809] range","detail":"{range_begin:/registry/minions/ha-132600-m03; range_end:; response_count:1; response_revision:2632; }","duration":"239.116617ms","start":"2024-10-14T14:42:19.138318Z","end":"2024-10-14T14:42:19.377435Z","steps":["trace[946791809] 'range keys from in-memory index tree'  (duration: 237.374121ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-14T14:42:25.277317Z","caller":"traceutil/trace.go:171","msg":"trace[682053374] linearizableReadLoop","detail":"{readStateIndex:2917; appliedIndex:2916; }","duration":"140.762656ms","start":"2024-10-14T14:42:25.136515Z","end":"2024-10-14T14:42:25.277278Z","steps":["trace[682053374] 'read index received'  (duration: 140.586757ms)","trace[682053374] 'applied index is now lower than readState.Index'  (duration: 175.299µs)"],"step_count":2}
	{"level":"warn","ts":"2024-10-14T14:42:25.277542Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"140.959056ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-132600-m03\" ","response":"range_response_count:1 size:3114"}
	{"level":"info","ts":"2024-10-14T14:42:25.277634Z","caller":"traceutil/trace.go:171","msg":"trace[820348600] range","detail":"{range_begin:/registry/minions/ha-132600-m03; range_end:; response_count:1; response_revision:2648; }","duration":"141.112155ms","start":"2024-10-14T14:42:25.136511Z","end":"2024-10-14T14:42:25.277623Z","steps":["trace[820348600] 'agreement among raft nodes before linearized reading'  (duration: 140.928056ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-14T14:42:25.279675Z","caller":"traceutil/trace.go:171","msg":"trace[858523099] transaction","detail":"{read_only:false; response_revision:2648; number_of_response:1; }","duration":"166.194393ms","start":"2024-10-14T14:42:25.113464Z","end":"2024-10-14T14:42:25.279658Z","steps":["trace[858523099] 'process raft request'  (duration: 163.709399ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-14T14:42:25.921423Z","caller":"traceutil/trace.go:171","msg":"trace[45131034] transaction","detail":"{read_only:false; response_revision:2649; number_of_response:1; }","duration":"240.489012ms","start":"2024-10-14T14:42:25.680919Z","end":"2024-10-14T14:42:25.921408Z","steps":["trace[45131034] 'process raft request'  (duration: 240.146113ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-14T14:42:31.076106Z","caller":"traceutil/trace.go:171","msg":"trace[1663891298] transaction","detail":"{read_only:false; response_revision:2663; number_of_response:1; }","duration":"120.948703ms","start":"2024-10-14T14:42:30.955138Z","end":"2024-10-14T14:42:31.076086Z","steps":["trace[1663891298] 'process raft request'  (duration: 120.791304ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-14T14:42:31.287943Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"149.945532ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-132600-m03\" ","response":"range_response_count:1 size:3114"}
	{"level":"info","ts":"2024-10-14T14:42:31.288066Z","caller":"traceutil/trace.go:171","msg":"trace[1149000350] range","detail":"{range_begin:/registry/minions/ha-132600-m03; range_end:; response_count:1; response_revision:2663; }","duration":"150.051932ms","start":"2024-10-14T14:42:31.137978Z","end":"2024-10-14T14:42:31.288030Z","steps":["trace[1149000350] 'range keys from in-memory index tree'  (duration: 149.753933ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-14T14:42:32.313366Z","caller":"traceutil/trace.go:171","msg":"trace[1148264888] linearizableReadLoop","detail":"{readStateIndex:2939; appliedIndex:2938; }","duration":"175.689169ms","start":"2024-10-14T14:42:32.137533Z","end":"2024-10-14T14:42:32.313222Z","steps":["trace[1148264888] 'read index received'  (duration: 175.27797ms)","trace[1148264888] 'applied index is now lower than readState.Index'  (duration: 410.299µs)"],"step_count":2}
	{"level":"info","ts":"2024-10-14T14:42:32.313766Z","caller":"traceutil/trace.go:171","msg":"trace[53442055] transaction","detail":"{read_only:false; response_revision:2667; number_of_response:1; }","duration":"341.166363ms","start":"2024-10-14T14:42:31.972588Z","end":"2024-10-14T14:42:32.313754Z","steps":["trace[53442055] 'process raft request'  (duration: 340.380665ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-14T14:42:32.314054Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-14T14:42:31.972576Z","time spent":"341.263763ms","remote":"127.0.0.1:55456","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1094,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:2661 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1021 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-10-14T14:42:32.314975Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"177.510264ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-132600-m03\" ","response":"range_response_count:1 size:3114"}
	{"level":"info","ts":"2024-10-14T14:42:32.315174Z","caller":"traceutil/trace.go:171","msg":"trace[597120152] range","detail":"{range_begin:/registry/minions/ha-132600-m03; range_end:; response_count:1; response_revision:2667; }","duration":"177.711264ms","start":"2024-10-14T14:42:32.137452Z","end":"2024-10-14T14:42:32.315163Z","steps":["trace[597120152] 'agreement among raft nodes before linearized reading'  (duration: 177.444264ms)"],"step_count":1}
	
	
	==> kernel <==
	 14:43:42 up 23 min,  0 users,  load average: 0.30, 0.44, 0.36
	Linux ha-132600 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [dae2c0aa67af] <==
	I1014 14:42:37.570565       1 main.go:323] Node ha-132600-m03 has CIDR [10.244.1.0/24] 
	I1014 14:42:47.563283       1 main.go:296] Handling node with IPs: map[172.20.108.120:{}]
	I1014 14:42:47.563325       1 main.go:300] handling current node
	I1014 14:42:47.563343       1 main.go:296] Handling node with IPs: map[172.20.111.174:{}]
	I1014 14:42:47.563349       1 main.go:323] Node ha-132600-m03 has CIDR [10.244.1.0/24] 
	I1014 14:42:57.572060       1 main.go:296] Handling node with IPs: map[172.20.108.120:{}]
	I1014 14:42:57.572211       1 main.go:300] handling current node
	I1014 14:42:57.572313       1 main.go:296] Handling node with IPs: map[172.20.111.174:{}]
	I1014 14:42:57.572327       1 main.go:323] Node ha-132600-m03 has CIDR [10.244.1.0/24] 
	I1014 14:43:07.565205       1 main.go:296] Handling node with IPs: map[172.20.108.120:{}]
	I1014 14:43:07.565320       1 main.go:300] handling current node
	I1014 14:43:07.565344       1 main.go:296] Handling node with IPs: map[172.20.111.174:{}]
	I1014 14:43:07.565352       1 main.go:323] Node ha-132600-m03 has CIDR [10.244.1.0/24] 
	I1014 14:43:17.565684       1 main.go:296] Handling node with IPs: map[172.20.108.120:{}]
	I1014 14:43:17.566016       1 main.go:300] handling current node
	I1014 14:43:17.566129       1 main.go:296] Handling node with IPs: map[172.20.111.174:{}]
	I1014 14:43:17.566562       1 main.go:323] Node ha-132600-m03 has CIDR [10.244.1.0/24] 
	I1014 14:43:27.564991       1 main.go:296] Handling node with IPs: map[172.20.111.174:{}]
	I1014 14:43:27.565143       1 main.go:323] Node ha-132600-m03 has CIDR [10.244.1.0/24] 
	I1014 14:43:27.565360       1 main.go:296] Handling node with IPs: map[172.20.108.120:{}]
	I1014 14:43:27.565400       1 main.go:300] handling current node
	I1014 14:43:37.565763       1 main.go:296] Handling node with IPs: map[172.20.108.120:{}]
	I1014 14:43:37.566042       1 main.go:300] handling current node
	I1014 14:43:37.566064       1 main.go:296] Handling node with IPs: map[172.20.111.174:{}]
	I1014 14:43:37.566073       1 main.go:323] Node ha-132600-m03 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [b661cb671310] <==
	I1014 14:21:58.925388       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1014 14:21:58.925426       1 policy_source.go:224] refreshing policies
	I1014 14:21:58.928296       1 controller.go:615] quota admission added evaluator for: namespaces
	E1014 14:21:58.950810       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1014 14:21:59.174958       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1014 14:21:59.819127       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1014 14:21:59.829797       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1014 14:21:59.829835       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1014 14:22:00.942976       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1014 14:22:01.015766       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1014 14:22:01.150423       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1014 14:22:01.164901       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.20.108.120]
	I1014 14:22:01.165818       1 controller.go:615] quota admission added evaluator for: endpoints
	I1014 14:22:01.175247       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1014 14:22:01.835583       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1014 14:22:03.554010       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1014 14:22:03.589407       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1014 14:22:03.617368       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1014 14:22:07.346752       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1014 14:22:07.516653       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E1014 14:38:03.250866       1 conn.go:339] Error on socket receive: read tcp 172.20.111.254:8443->172.20.96.1:57382: use of closed network connection
	E1014 14:38:04.469699       1 conn.go:339] Error on socket receive: read tcp 172.20.111.254:8443->172.20.96.1:57390: use of closed network connection
	E1014 14:38:05.596658       1 conn.go:339] Error on socket receive: read tcp 172.20.111.254:8443->172.20.96.1:57398: use of closed network connection
	E1014 14:38:40.228042       1 conn.go:339] Error on socket receive: read tcp 172.20.111.254:8443->172.20.96.1:57419: use of closed network connection
	E1014 14:38:50.673483       1 conn.go:339] Error on socket receive: read tcp 172.20.111.254:8443->172.20.96.1:57421: use of closed network connection
	
	
	==> kube-controller-manager [4a8cce31aa3a] <==
	I1014 14:41:59.832379       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600"
	I1014 14:42:14.020113       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-132600-m03\" does not exist"
	I1014 14:42:14.099308       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-132600-m03" podCIDRs=["10.244.1.0/24"]
	I1014 14:42:14.099376       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600-m03"
	E1014 14:42:14.131350       1 range_allocator.go:427] "Failed to update node PodCIDR after multiple attempts" err="failed to patch node CIDR: Node \"ha-132600-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.2.0/24\", \"10.244.1.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="ha-132600-m03" podCIDRs=["10.244.2.0/24"]
	E1014 14:42:14.131490       1 range_allocator.go:433] "CIDR assignment for node failed. Releasing allocated CIDR" err="failed to patch node CIDR: Node \"ha-132600-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.2.0/24\", \"10.244.1.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="ha-132600-m03"
	E1014 14:42:14.131543       1 range_allocator.go:246] "Unhandled Error" err="error syncing 'ha-132600-m03': failed to patch node CIDR: Node \"ha-132600-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.2.0/24\", \"10.244.1.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid], requeuing" logger="UnhandledError"
	I1014 14:42:14.132334       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600-m03"
	I1014 14:42:14.137556       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600-m03"
	I1014 14:42:14.252734       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600-m03"
	I1014 14:42:14.858650       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600-m03"
	I1014 14:42:17.081634       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-132600-m03"
	I1014 14:42:17.154621       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600-m03"
	I1014 14:42:24.267607       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600-m03"
	I1014 14:42:44.660569       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600-m03"
	I1014 14:42:46.187605       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-132600-m03"
	I1014 14:42:46.189657       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600-m03"
	I1014 14:42:46.206592       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600-m03"
	I1014 14:42:46.221387       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="51µs"
	I1014 14:42:46.232286       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="51.3µs"
	I1014 14:42:46.252360       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="196µs"
	I1014 14:42:47.108442       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600-m03"
	I1014 14:42:49.071073       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="17.613156ms"
	I1014 14:42:49.071582       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="49.8µs"
	I1014 14:43:14.925968       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600-m03"
	
	
	==> kube-proxy [4745a4b0dc37] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1014 14:22:09.024513       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1014 14:22:09.047174       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["172.20.108.120"]
	E1014 14:22:09.047308       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1014 14:22:09.124346       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1014 14:22:09.124494       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1014 14:22:09.124529       1 server_linux.go:169] "Using iptables Proxier"
	I1014 14:22:09.128809       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1014 14:22:09.129721       1 server.go:483] "Version info" version="v1.31.1"
	I1014 14:22:09.129805       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 14:22:09.131652       1 config.go:199] "Starting service config controller"
	I1014 14:22:09.131988       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1014 14:22:09.132269       1 config.go:105] "Starting endpoint slice config controller"
	I1014 14:22:09.132619       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1014 14:22:09.133592       1 config.go:328] "Starting node config controller"
	I1014 14:22:09.135184       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1014 14:22:09.232621       1 shared_informer.go:320] Caches are synced for service config
	I1014 14:22:09.233933       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1014 14:22:09.236054       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [35c870864a80] <==
	W1014 14:21:59.944424       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1014 14:21:59.944452       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 14:21:59.979078       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1014 14:21:59.979365       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 14:22:00.111250       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1014 14:22:00.111664       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 14:22:00.192015       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1014 14:22:00.192346       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1014 14:22:00.196984       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1014 14:22:00.197112       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 14:22:00.197208       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1014 14:22:00.197241       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1014 14:22:00.212172       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1014 14:22:00.212214       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1014 14:22:00.260144       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1014 14:22:00.261002       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1014 14:22:00.276235       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1014 14:22:00.276419       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1014 14:22:00.295031       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1014 14:22:00.295892       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 14:22:00.311664       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1014 14:22:00.313212       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1014 14:22:00.354351       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1014 14:22:00.355204       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 14:22:02.117335       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 14 14:39:03 ha-132600 kubelet[2324]: E1014 14:39:03.675462    2324 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 14 14:39:03 ha-132600 kubelet[2324]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 14 14:39:03 ha-132600 kubelet[2324]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 14 14:39:03 ha-132600 kubelet[2324]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 14 14:39:03 ha-132600 kubelet[2324]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 14 14:40:03 ha-132600 kubelet[2324]: E1014 14:40:03.676816    2324 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 14 14:40:03 ha-132600 kubelet[2324]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 14 14:40:03 ha-132600 kubelet[2324]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 14 14:40:03 ha-132600 kubelet[2324]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 14 14:40:03 ha-132600 kubelet[2324]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 14 14:41:03 ha-132600 kubelet[2324]: E1014 14:41:03.675764    2324 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 14 14:41:03 ha-132600 kubelet[2324]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 14 14:41:03 ha-132600 kubelet[2324]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 14 14:41:03 ha-132600 kubelet[2324]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 14 14:41:03 ha-132600 kubelet[2324]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 14 14:42:03 ha-132600 kubelet[2324]: E1014 14:42:03.676366    2324 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 14 14:42:03 ha-132600 kubelet[2324]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 14 14:42:03 ha-132600 kubelet[2324]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 14 14:42:03 ha-132600 kubelet[2324]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 14 14:42:03 ha-132600 kubelet[2324]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 14 14:43:03 ha-132600 kubelet[2324]: E1014 14:43:03.675510    2324 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 14 14:43:03 ha-132600 kubelet[2324]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 14 14:43:03 ha-132600 kubelet[2324]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 14 14:43:03 ha-132600 kubelet[2324]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 14 14:43:03 ha-132600 kubelet[2324]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
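Two things in the captured logs above are worth flagging before the post-mortem commands. First, the kube-proxy section opens mid-message (the dangling "add table ip kube-proxy" fragment) because `minikube logs -n 25` keeps only the last 25 lines per container, not because the log itself is corrupt. Second, the controller-manager shows the node-ipam-controller assigning ha-132600-m03 the PodCIDR 10.244.1.0/24 and then failing to patch a second allocation (10.244.2.0/24) onto it, since a node may carry at most one PodCIDR per IP family; that matches the CIDRAssignmentFailed event under the m03 node description. A minimal sketch for checking which CIDR actually stuck on each node, assuming the ha-132600 kubectl context from this run is still available:

	kubectl --context ha-132600 get nodes -o custom-columns=NAME:.metadata.name,PODCIDRS:.spec.podCIDRs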
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-132600 -n ha-132600
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-132600 -n ha-132600: (11.7173604s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-132600 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-7dff88458-rng7p
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/AddWorkerNode]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-132600 describe pod busybox-7dff88458-rng7p
helpers_test.go:282: (dbg) kubectl --context ha-132600 describe pod busybox-7dff88458-rng7p:

                                                
                                                
-- stdout --
	Name:             busybox-7dff88458-rng7p
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7dff88458
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7dff88458
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9hpj2 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-9hpj2:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                  From               Message
	  ----     ------            ----                 ----               -------
	  Warning  FailedScheduling  2m22s (x4 over 17m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  59s (x2 over 69s)    default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
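The Pending pod above is blocked purely by pod anti-affinity: the FailedScheduling events show the scheduler rejecting first 1, then 2 nodes because each Ready node already hosts a busybox replica, so busybox-7dff88458-rng7p cannot land until a third schedulable node joins. A quick sketch to confirm where the existing replicas sit, assuming the same ha-132600 context and the app=busybox label shown in the pod description:

	kubectl --context ha-132600 get pods -l app=busybox -o wide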
helpers_test.go:285: <<< TestMultiControlPlane/serial/AddWorkerNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (272.14s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (67.58s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
E1014 07:44:10.845754     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\addons-043000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (34.7357687s)
ha_test.go:305: expected profile "ha-132600" in json of 'profile list' to include 4 nodes but have 3 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-132600\",\"Status\":\"OK\",\"Config\":{\"Name\":\"ha-132600\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"hyperv\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\"
:8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-132600\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"172.20.111.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"172.20.108.120\",\"Port\":8443,\"KubernetesVersion\":\
"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"172.20.111.83\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"172.20.111.174\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,
\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"C:\\\\Users\\\\jenkins.minikube1:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOp
timizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-windows-amd64.exe profile list --output json"
ha_test.go:309: expected profile "ha-132600" in json of 'profile list' to have "HAppy" status but have "OK" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-132600\",\"Status\":\"OK\",\"Config\":{\"Name\":\"ha-132600\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"hyperv\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-132600\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"172.20.111.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"172.20.108.120\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"172.20.111.83\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"172.20.111.174\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"C:\\\\Users\\\\jenkins.minikube1:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-windows-amd64.exe profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-132600 -n ha-132600
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-132600 -n ha-132600: (11.7060328s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/HAppyAfterClusterStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterClusterStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-132600 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-132600 logs -n 25: (8.1142199s)
helpers_test.go:252: TestMultiControlPlane/serial/HAppyAfterClusterStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| kubectl | -p ha-132600 -- get pods -o          | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:36 PDT | 14 Oct 24 07:36 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- get pods -o          | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:36 PDT | 14 Oct 24 07:36 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- get pods -o          | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:36 PDT | 14 Oct 24 07:36 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- get pods -o          | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:36 PDT | 14 Oct 24 07:36 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- get pods -o          | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:36 PDT | 14 Oct 24 07:36 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- get pods -o          | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:36 PDT | 14 Oct 24 07:36 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- get pods -o          | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:37 PDT | 14 Oct 24 07:37 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- get pods -o          | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:37 PDT | 14 Oct 24 07:37 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- get pods -o          | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT | 14 Oct 24 07:38 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- get pods -o          | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT | 14 Oct 24 07:38 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT |                     |
	|         | busybox-7dff88458-8thz6 --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT | 14 Oct 24 07:38 PDT |
	|         | busybox-7dff88458-kr92j --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT |                     |
	|         | busybox-7dff88458-rng7p --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT |                     |
	|         | busybox-7dff88458-8thz6 --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT | 14 Oct 24 07:38 PDT |
	|         | busybox-7dff88458-kr92j --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT |                     |
	|         | busybox-7dff88458-rng7p --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT |                     |
	|         | busybox-7dff88458-8thz6 -- nslookup  |           |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT | 14 Oct 24 07:38 PDT |
	|         | busybox-7dff88458-kr92j -- nslookup  |           |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT |                     |
	|         | busybox-7dff88458-rng7p -- nslookup  |           |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- get pods -o          | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT | 14 Oct 24 07:38 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT |                     |
	|         | busybox-7dff88458-8thz6              |           |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT | 14 Oct 24 07:38 PDT |
	|         | busybox-7dff88458-kr92j              |           |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT |                     |
	|         | busybox-7dff88458-kr92j -- sh        |           |                   |         |                     |                     |
	|         | -c ping -c 1 172.20.96.1             |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT |                     |
	|         | busybox-7dff88458-rng7p              |           |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |         |                     |                     |
	| node    | add -p ha-132600 -v=7                | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:39 PDT | 14 Oct 24 07:42 PDT |
	|         | --alsologtostderr                    |           |                   |         |                     |                     |
	|---------|--------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/14 07:19:00
	Running on machine: minikube1
	Binary: Built with gc go1.23.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 07:19:00.342865   13076 out.go:345] Setting OutFile to fd 1228 ...
	I1014 07:19:00.344546   13076 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:19:00.344546   13076 out.go:358] Setting ErrFile to fd 824...
	I1014 07:19:00.344546   13076 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:19:00.368247   13076 out.go:352] Setting JSON to false
	I1014 07:19:00.372277   13076 start.go:129] hostinfo: {"hostname":"minikube1","uptime":101054,"bootTime":1728814485,"procs":202,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5011 Build 19045.5011","kernelVersion":"10.0.19045.5011 Build 19045.5011","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W1014 07:19:00.372803   13076 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1014 07:19:00.379728   13076 out.go:177] * [ha-132600] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5011 Build 19045.5011
	I1014 07:19:00.386820   13076 notify.go:220] Checking for updates...
	I1014 07:19:00.389571   13076 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1014 07:19:00.392679   13076 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 07:19:00.395167   13076 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I1014 07:19:00.398285   13076 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 07:19:00.400520   13076 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 07:19:00.404118   13076 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 07:19:05.654471   13076 out.go:177] * Using the hyperv driver based on user configuration
	I1014 07:19:05.658285   13076 start.go:297] selected driver: hyperv
	I1014 07:19:05.658285   13076 start.go:901] validating driver "hyperv" against <nil>
	I1014 07:19:05.658285   13076 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 07:19:05.705211   13076 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1014 07:19:05.706541   13076 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 07:19:05.706541   13076 cni.go:84] Creating CNI manager for ""
	I1014 07:19:05.706541   13076 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1014 07:19:05.706541   13076 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1014 07:19:05.707066   13076 start.go:340] cluster config:
	{Name:ha-132600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-132600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 07:19:05.707207   13076 iso.go:125] acquiring lock: {Name:mkb71635bcbccba29dce9048ad4cc71430a7e577 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 07:19:05.711264   13076 out.go:177] * Starting "ha-132600" primary control-plane node in "ha-132600" cluster
	I1014 07:19:05.715097   13076 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1014 07:19:05.715262   13076 preload.go:146] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I1014 07:19:05.715344   13076 cache.go:56] Caching tarball of preloaded images
	I1014 07:19:05.715452   13076 preload.go:172] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1014 07:19:05.715452   13076 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1014 07:19:05.716553   13076 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\config.json ...
	I1014 07:19:05.716898   13076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\config.json: {Name:mkbd11bf3f90adebf1f4630d2e8deaac328748d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:19:05.717886   13076 start.go:360] acquireMachinesLock for ha-132600: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 07:19:05.717886   13076 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-132600"
	I1014 07:19:05.718562   13076 start.go:93] Provisioning new machine with config: &{Name:ha-132600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-132600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1014 07:19:05.718562   13076 start.go:125] createHost starting for "" (driver="hyperv")
	I1014 07:19:05.721891   13076 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1014 07:19:05.722555   13076 start.go:159] libmachine.API.Create for "ha-132600" (driver="hyperv")
	I1014 07:19:05.722555   13076 client.go:168] LocalClient.Create starting
	I1014 07:19:05.722555   13076 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I1014 07:19:05.723316   13076 main.go:141] libmachine: Decoding PEM data...
	I1014 07:19:05.723316   13076 main.go:141] libmachine: Parsing certificate...
	I1014 07:19:05.723316   13076 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I1014 07:19:05.723877   13076 main.go:141] libmachine: Decoding PEM data...
	I1014 07:19:05.723955   13076 main.go:141] libmachine: Parsing certificate...
	I1014 07:19:05.723955   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I1014 07:19:07.752927   13076 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I1014 07:19:07.753576   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:07.753793   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I1014 07:19:09.445838   13076 main.go:141] libmachine: [stdout =====>] : False
	
	I1014 07:19:09.445838   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:09.446315   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1014 07:19:10.901034   13076 main.go:141] libmachine: [stdout =====>] : True
	
	I1014 07:19:10.901034   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:10.901034   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1014 07:19:14.320078   13076 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1014 07:19:14.320309   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:14.323253   13076 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1014 07:19:14.829443   13076 main.go:141] libmachine: Creating SSH key...
	I1014 07:19:14.988345   13076 main.go:141] libmachine: Creating VM...
	I1014 07:19:14.988345   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1014 07:19:17.790932   13076 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1014 07:19:17.791005   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:17.791145   13076 main.go:141] libmachine: Using switch "Default Switch"
	I1014 07:19:17.791145   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1014 07:19:19.529861   13076 main.go:141] libmachine: [stdout =====>] : True
	
	I1014 07:19:19.529861   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:19.529861   13076 main.go:141] libmachine: Creating VHD
	I1014 07:19:19.530436   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\fixed.vhd' -SizeBytes 10MB -Fixed
	I1014 07:19:23.105904   13076 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 267BC11D-A195-4636-8AE9-EF1D7966C95C
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I1014 07:19:23.105987   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:23.106037   13076 main.go:141] libmachine: Writing magic tar header
	I1014 07:19:23.106037   13076 main.go:141] libmachine: Writing SSH key tar header
	I1014 07:19:23.118040   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\disk.vhd' -VHDType Dynamic -DeleteSource
	I1014 07:19:26.201609   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:26.201950   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:26.202023   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\disk.vhd' -SizeBytes 20000MB
	I1014 07:19:28.768368   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:28.768789   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:28.768969   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-132600 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I1014 07:19:32.226690   13076 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-132600 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I1014 07:19:32.226915   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:32.226915   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-132600 -DynamicMemoryEnabled $false
	I1014 07:19:34.390731   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:34.390837   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:34.391049   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-132600 -Count 2
	I1014 07:19:36.449382   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:36.449628   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:36.449628   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-132600 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\boot2docker.iso'
	I1014 07:19:38.932909   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:38.932909   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:38.932909   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-132600 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\disk.vhd'
	I1014 07:19:41.509229   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:41.509229   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:41.509229   13076 main.go:141] libmachine: Starting VM...
	I1014 07:19:41.509229   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-132600
	I1014 07:19:44.718680   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:44.718680   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:44.719475   13076 main.go:141] libmachine: Waiting for host to start...
	I1014 07:19:44.719770   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:19:46.954823   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:19:46.954823   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:46.955829   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:19:49.408227   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:49.408227   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:50.409369   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:19:52.561607   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:19:52.561912   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:52.561912   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:19:55.081136   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:55.081187   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:56.082401   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:19:58.271497   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:19:58.271497   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:58.272384   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:00.740863   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:20:00.741070   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:01.741536   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:03.902654   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:03.902721   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:03.902721   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:06.378064   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:20:06.378064   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:07.379104   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:09.537329   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:09.537329   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:09.537541   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:12.074320   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:12.074320   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:12.074834   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:14.171613   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:14.171613   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:14.171613   13076 machine.go:93] provisionDockerMachine start ...
	I1014 07:20:14.172377   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:16.225265   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:16.225265   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:16.225634   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:18.692061   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:18.692061   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:18.698261   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:20:18.713495   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.108.120 22 <nil> <nil>}
	I1014 07:20:18.713568   13076 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 07:20:18.839311   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1014 07:20:18.839517   13076 buildroot.go:166] provisioning hostname "ha-132600"
	I1014 07:20:18.839517   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:20.911254   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:20.911424   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:20.911601   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:23.370138   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:23.370756   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:23.377008   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:20:23.377773   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.108.120 22 <nil> <nil>}
	I1014 07:20:23.377773   13076 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-132600 && echo "ha-132600" | sudo tee /etc/hostname
	I1014 07:20:23.537548   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-132600
	
	I1014 07:20:23.537548   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:25.571429   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:25.571429   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:25.572326   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:28.068677   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:28.068677   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:28.077026   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:20:28.078615   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.108.120 22 <nil> <nil>}
	I1014 07:20:28.078615   13076 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-132600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-132600/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-132600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 07:20:28.216809   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 07:20:28.216874   13076 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I1014 07:20:28.216990   13076 buildroot.go:174] setting up certificates
	I1014 07:20:28.217016   13076 provision.go:84] configureAuth start
	I1014 07:20:28.217047   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:30.246758   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:30.247093   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:30.247368   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:32.688428   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:32.688428   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:32.688428   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:34.748908   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:34.748908   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:34.749349   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:37.221628   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:37.221798   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:37.221798   13076 provision.go:143] copyHostCerts
	I1014 07:20:37.221998   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I1014 07:20:37.222444   13076 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I1014 07:20:37.222558   13076 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I1014 07:20:37.223110   13076 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1014 07:20:37.224109   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I1014 07:20:37.224599   13076 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I1014 07:20:37.224599   13076 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I1014 07:20:37.224959   13076 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I1014 07:20:37.225332   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I1014 07:20:37.225332   13076 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I1014 07:20:37.225332   13076 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I1014 07:20:37.226765   13076 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1014 07:20:37.227733   13076 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-132600 san=[127.0.0.1 172.20.108.120 ha-132600 localhost minikube]
	I1014 07:20:37.731674   13076 provision.go:177] copyRemoteCerts
	I1014 07:20:37.744706   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 07:20:37.744706   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:39.801309   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:39.801309   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:39.801309   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:42.273306   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:42.273367   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:42.273367   13076 sshutil.go:53] new ssh client: &{IP:172.20.108.120 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\id_rsa Username:docker}
	I1014 07:20:42.378794   13076 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.6340227s)
	I1014 07:20:42.378847   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1014 07:20:42.378847   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1200 bytes)
	I1014 07:20:42.424772   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1014 07:20:42.425289   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 07:20:42.471397   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1014 07:20:42.471964   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1014 07:20:42.527104   13076 provision.go:87] duration metric: took 14.3100395s to configureAuth
	I1014 07:20:42.527104   13076 buildroot.go:189] setting minikube options for container-runtime
	I1014 07:20:42.527712   13076 config.go:182] Loaded profile config "ha-132600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:20:42.528252   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:44.598308   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:44.598308   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:44.599305   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:47.040424   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:47.040637   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:47.045726   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:20:47.046398   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.108.120 22 <nil> <nil>}
	I1014 07:20:47.046398   13076 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1014 07:20:47.167710   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1014 07:20:47.167801   13076 buildroot.go:70] root file system type: tmpfs
	I1014 07:20:47.168208   13076 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1014 07:20:47.168315   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:49.202111   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:49.202331   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:49.202424   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:51.648278   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:51.648278   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:51.654248   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:20:51.655052   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.108.120 22 <nil> <nil>}
	I1014 07:20:51.655052   13076 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1014 07:20:51.814048   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1014 07:20:51.814229   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:53.871643   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:53.871643   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:53.872030   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:56.286054   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:56.286054   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:56.292357   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:20:56.293125   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.108.120 22 <nil> <nil>}
	I1014 07:20:56.293125   13076 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1014 07:20:58.458028   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1014 07:20:58.458028   13076 machine.go:96] duration metric: took 44.286361s to provisionDockerMachine
	I1014 07:20:58.458028   13076 client.go:171] duration metric: took 1m52.7353371s to LocalClient.Create
	I1014 07:20:58.458028   13076 start.go:167] duration metric: took 1m52.7353371s to libmachine.API.Create "ha-132600"
	I1014 07:20:58.458028   13076 start.go:293] postStartSetup for "ha-132600" (driver="hyperv")
	I1014 07:20:58.458028   13076 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 07:20:58.471262   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 07:20:58.471262   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:21:00.512590   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:21:00.513162   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:00.513321   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:21:02.924795   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:21:02.924929   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:02.925163   13076 sshutil.go:53] new ssh client: &{IP:172.20.108.120 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\id_rsa Username:docker}
	I1014 07:21:03.032582   13076 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.5613141s)
	I1014 07:21:03.044373   13076 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 07:21:03.050831   13076 info.go:137] Remote host: Buildroot 2023.02.9
	I1014 07:21:03.050831   13076 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I1014 07:21:03.051271   13076 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I1014 07:21:03.052198   13076 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem -> 9362.pem in /etc/ssl/certs
	I1014 07:21:03.052295   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem -> /etc/ssl/certs/9362.pem
	I1014 07:21:03.063873   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 07:21:03.081968   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem --> /etc/ssl/certs/9362.pem (1708 bytes)
	I1014 07:21:03.128637   13076 start.go:296] duration metric: took 4.6706031s for postStartSetup
	I1014 07:21:03.132031   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:21:05.196102   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:21:05.196622   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:05.196696   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:21:07.624940   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:21:07.625781   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:07.626103   13076 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\config.json ...
	I1014 07:21:07.629046   13076 start.go:128] duration metric: took 2m1.9103356s to createHost
	I1014 07:21:07.629046   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:21:09.687162   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:21:09.687420   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:09.687490   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:21:12.104556   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:21:12.104556   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:12.110165   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:21:12.110572   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.108.120 22 <nil> <nil>}
	I1014 07:21:12.110572   13076 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1014 07:21:12.241524   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728915672.238854784
	
	I1014 07:21:12.241655   13076 fix.go:216] guest clock: 1728915672.238854784
	I1014 07:21:12.241726   13076 fix.go:229] Guest: 2024-10-14 07:21:12.238854784 -0700 PDT Remote: 2024-10-14 07:21:07.6290467 -0700 PDT m=+127.381500801 (delta=4.609808084s)
	I1014 07:21:12.241841   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:21:14.299338   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:21:14.299599   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:14.299599   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:21:16.767477   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:21:16.767665   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:16.775565   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:21:16.775565   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.108.120 22 <nil> <nil>}
	I1014 07:21:16.775565   13076 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1728915672
	I1014 07:21:16.921038   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Oct 14 14:21:12 UTC 2024
	
	I1014 07:21:16.921096   13076 fix.go:236] clock set: Mon Oct 14 14:21:12 UTC 2024
	 (err=<nil>)
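	The clock fix above reads the guest's epoch with date +%s.%N, compares it against the host-side timestamp (a 4.6s delta in this run), and rewrites the guest clock with date -s. The same check done by hand (a sketch, using the VM address from this log and an assumed key path):
	
	# Measure guest/host clock drift over SSH and reset the guest clock from the host epoch.
	guest=$(ssh -i id_rsa docker@172.20.108.120 'date +%s')
	host=$(date +%s)
	echo "drift: $((host - guest))s"
	ssh -i id_rsa docker@172.20.108.120 "sudo date -s @${host}"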
	I1014 07:21:16.921169   13076 start.go:83] releasing machines lock for "ha-132600", held for 2m11.2030498s
	I1014 07:21:16.921319   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:21:18.988904   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:21:18.989898   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:18.989947   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:21:21.410167   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:21:21.410403   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:21.414482   13076 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1014 07:21:21.414683   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:21:21.423844   13076 ssh_runner.go:195] Run: cat /version.json
	I1014 07:21:21.423844   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:21:23.554001   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:21:23.554206   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:23.554362   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:21:23.565882   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:21:23.565882   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:23.565882   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:21:26.092249   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:21:26.092249   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:26.092379   13076 sshutil.go:53] new ssh client: &{IP:172.20.108.120 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\id_rsa Username:docker}
	I1014 07:21:26.119019   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:21:26.119086   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:26.119215   13076 sshutil.go:53] new ssh client: &{IP:172.20.108.120 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\id_rsa Username:docker}
	I1014 07:21:26.179573   13076 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.764971s)
	W1014 07:21:26.179670   13076 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1014 07:21:26.212879   13076 ssh_runner.go:235] Completed: cat /version.json: (4.7890281s)
	I1014 07:21:26.223815   13076 ssh_runner.go:195] Run: systemctl --version
	I1014 07:21:26.243251   13076 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 07:21:26.251321   13076 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 07:21:26.262431   13076 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 07:21:26.290027   13076 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1014 07:21:26.290027   13076 start.go:495] detecting cgroup driver to use...
	I1014 07:21:26.290027   13076 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W1014 07:21:26.296743   13076 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W1014 07:21:26.296743   13076 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1014 07:21:26.340326   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1014 07:21:26.370362   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1014 07:21:26.391878   13076 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1014 07:21:26.402847   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1014 07:21:26.433241   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1014 07:21:26.462619   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1014 07:21:26.491735   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1014 07:21:26.520404   13076 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 07:21:26.550615   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1014 07:21:26.580278   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1014 07:21:26.608564   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
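	The run of commands above is a series of in-place sed rewrites of /etc/containerd/config.toml: pin the sandbox (pause) image to registry.k8s.io/pause:3.10, force SystemdCgroup = false so containerd uses cgroupfs, migrate the v1 runtime names to io.containerd.runc.v2, and point conf_dir at /etc/cni/net.d. Collapsed into a single sketch:
	
	# The same config.toml edits as above, in one sed invocation, followed by a restart.
	sudo sed -i -r \
	  -e 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' \
	  -e 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' \
	  -e 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' \
	  -e 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' \
	  /etc/containerd/config.toml
	sudo systemctl restart containerd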
	I1014 07:21:26.636849   13076 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 07:21:26.656448   13076 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1014 07:21:26.667504   13076 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1014 07:21:26.700184   13076 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 07:21:26.726834   13076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:21:26.916163   13076 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1014 07:21:26.953200   13076 start.go:495] detecting cgroup driver to use...
	I1014 07:21:26.967627   13076 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1014 07:21:27.005080   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 07:21:27.039193   13076 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 07:21:27.083701   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 07:21:27.118670   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1014 07:21:27.153508   13076 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1014 07:21:27.209689   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1014 07:21:27.230765   13076 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 07:21:27.273213   13076 ssh_runner.go:195] Run: which cri-dockerd
	I1014 07:21:27.291783   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1014 07:21:27.308581   13076 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1014 07:21:27.351709   13076 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1014 07:21:27.539571   13076 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1014 07:21:27.725567   13076 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1014 07:21:27.725567   13076 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1014 07:21:27.768723   13076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:21:27.951142   13076 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1014 07:21:30.486027   13076 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5347969s)
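	The 130-byte /etc/docker/daemon.json is written from memory, so its exact contents never appear in the log; for a daemon being switched to the cgroupfs driver it plausibly looks like the following (an assumed shape, not the logged bytes):
	
	# Write an assumed daemon.json selecting the cgroupfs driver, then restart docker.
	sudo tee /etc/docker/daemon.json >/dev/null <<-'EOF'
	{
	  "exec-opts": ["native.cgroupdriver=cgroupfs"],
	  "log-driver": "json-file",
	  "log-opts": { "max-size": "100m" },
	  "storage-driver": "overlay2"
	}
	EOF
	sudo systemctl daemon-reload && sudo systemctl restart docker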
	I1014 07:21:30.497722   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1014 07:21:30.531680   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1014 07:21:30.563742   13076 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1014 07:21:30.767218   13076 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1014 07:21:30.967093   13076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:21:31.191192   13076 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1014 07:21:31.229757   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1014 07:21:31.267426   13076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:21:31.462408   13076 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1014 07:21:31.569509   13076 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1014 07:21:31.581151   13076 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1014 07:21:31.591322   13076 start.go:563] Will wait 60s for crictl version
	I1014 07:21:31.604845   13076 ssh_runner.go:195] Run: which crictl
	I1014 07:21:31.622874   13076 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1014 07:21:31.680621   13076 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I1014 07:21:31.689423   13076 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1014 07:21:31.732594   13076 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1014 07:21:31.766375   13076 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	I1014 07:21:31.766375   13076 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I1014 07:21:31.770835   13076 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I1014 07:21:31.770835   13076 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I1014 07:21:31.770835   13076 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I1014 07:21:31.770835   13076 ip.go:211] Found interface: {Index:13 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:08:65:4d Flags:up|broadcast|multicast|running}
	I1014 07:21:31.773955   13076 ip.go:214] interface addr: fe80::b548:ff79:d464:75b8/64
	I1014 07:21:31.773955   13076 ip.go:214] interface addr: 172.20.96.1/20
	I1014 07:21:31.784171   13076 ssh_runner.go:195] Run: grep 172.20.96.1	host.minikube.internal$ /etc/hosts
	I1014 07:21:31.791192   13076 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.20.96.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
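	The /etc/hosts update above is a grep-out-then-append idiom: any stale host.minikube.internal line is dropped, the current mapping is re-added, and the temp file is copied back into place, so the entry is refreshed without duplicates. Standalone sketch with the values from this run:
	
	# Refresh one /etc/hosts entry (tab-separated, anchored at end of line) without duplicating it.
	IP=172.20.96.1 NAME=host.minikube.internal
	{ grep -v $'\t'"${NAME}"'$' /etc/hosts; printf '%s\t%s\n' "${IP}" "${NAME}"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$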
	I1014 07:21:31.825649   13076 kubeadm.go:883] updating cluster {Name:ha-132600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-132600 Namespace:default APIServerHAVIP:172.20.111.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.108.120 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 07:21:31.825649   13076 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1014 07:21:31.834667   13076 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1014 07:21:31.867707   13076 docker.go:689] Got preloaded images: 
	I1014 07:21:31.867707   13076 docker.go:695] registry.k8s.io/kube-apiserver:v1.31.1 wasn't preloaded
	I1014 07:21:31.880238   13076 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1014 07:21:31.908893   13076 ssh_runner.go:195] Run: which lz4
	I1014 07:21:31.914374   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1014 07:21:31.924715   13076 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1014 07:21:31.930846   13076 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1014 07:21:31.931075   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (342028912 bytes)
	I1014 07:21:33.829320   13076 docker.go:653] duration metric: took 1.9148657s to copy over tarball
	I1014 07:21:33.840382   13076 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1014 07:21:42.539043   13076 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.6986506s)
	I1014 07:21:42.539174   13076 ssh_runner.go:146] rm: /preloaded.tar.lz4
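	The preload path replaces dozens of individual image pulls: one ~342 MB lz4 tarball of the docker image store is copied into the VM and unpacked over /var. The equivalent manual steps (a sketch; the log writes /preloaded.tar.lz4 at the VM root, /tmp is used here to sidestep permissions):
	
	# Copy the preload tarball in and unpack it over /var, preserving security xattrs.
	scp -i id_rsa preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 docker@172.20.108.120:/tmp/preloaded.tar.lz4
	ssh -i id_rsa docker@172.20.108.120 \
	  'sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /tmp/preloaded.tar.lz4 && sudo rm /tmp/preloaded.tar.lz4'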
	I1014 07:21:42.611065   13076 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1014 07:21:42.636639   13076 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I1014 07:21:42.681041   13076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:21:42.879066   13076 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1014 07:21:46.161713   13076 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.2824935s)
	I1014 07:21:46.173131   13076 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1014 07:21:46.198645   13076 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1014 07:21:46.198645   13076 cache_images.go:84] Images are preloaded, skipping loading
	I1014 07:21:46.198645   13076 kubeadm.go:934] updating node { 172.20.108.120 8443 v1.31.1 docker true true} ...
	I1014 07:21:46.198645   13076 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-132600 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.20.108.120
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-132600 Namespace:default APIServerHAVIP:172.20.111.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 07:21:46.209892   13076 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1014 07:21:46.283733   13076 cni.go:84] Creating CNI manager for ""
	I1014 07:21:46.283861   13076 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1014 07:21:46.283932   13076 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1014 07:21:46.284000   13076 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.20.108.120 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-132600 NodeName:ha-132600 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.20.108.120"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.20.108.120 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 07:21:46.284290   13076 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.20.108.120
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-132600"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "172.20.108.120"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.20.108.120"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
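	A config like the one above can be exercised before it mutates the node: kubeadm's --dry-run prints the actions it would take without performing them (a sketch, using the path the log stages this file to):
	
	# Rehearse the generated kubeadm config without modifying the host.
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run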
	
	I1014 07:21:46.284367   13076 kube-vip.go:115] generating kube-vip config ...
	I1014 07:21:46.295693   13076 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1014 07:21:46.324017   13076 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1014 07:21:46.324230   13076 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.20.111.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
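	kube-vip can emit a comparable static-pod manifest itself; something like the following should approximate the ARP, leader-election, control-plane setup above (a sketch: the manifest pod subcommand and flag names are taken from kube-vip's documentation and are assumptions here, not from this log):
	
	# Generate a kube-vip static pod manifest similar to the config above (flags assumed).
	docker run --rm ghcr.io/kube-vip/kube-vip:v0.8.3 manifest pod \
	  --interface eth0 --address 172.20.111.254 \
	  --controlplane --arp --leaderElection \
	  | sudo tee /etc/kubernetes/manifests/kube-vip.yaml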
	I1014 07:21:46.334184   13076 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1014 07:21:46.349321   13076 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 07:21:46.360771   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1014 07:21:46.379224   13076 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (310 bytes)
	I1014 07:21:46.412080   13076 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 07:21:46.441691   13076 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I1014 07:21:46.479025   13076 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1014 07:21:46.519461   13076 ssh_runner.go:195] Run: grep 172.20.111.254	control-plane.minikube.internal$ /etc/hosts
	I1014 07:21:46.525021   13076 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.20.111.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 07:21:46.554943   13076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:21:46.730466   13076 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 07:21:46.761153   13076 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600 for IP: 172.20.108.120
	I1014 07:21:46.761153   13076 certs.go:194] generating shared ca certs ...
	I1014 07:21:46.761153   13076 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:21:46.761825   13076 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I1014 07:21:46.762680   13076 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I1014 07:21:46.762846   13076 certs.go:256] generating profile certs ...
	I1014 07:21:46.763605   13076 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\client.key
	I1014 07:21:46.763605   13076 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\client.crt with IP's: []
	I1014 07:21:46.955865   13076 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\client.crt ...
	I1014 07:21:46.955865   13076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\client.crt: {Name:mk4a5e51a4b9d62279c7ef4ef47716bcdf88d704 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:21:46.957857   13076 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\client.key ...
	I1014 07:21:46.957857   13076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\client.key: {Name:mkc80e99cfe4d8d17198a87fb188cfa1a72d727d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:21:46.958881   13076 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.key.59094d61
	I1014 07:21:46.958881   13076 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.crt.59094d61 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.20.108.120 172.20.111.254]
	I1014 07:21:47.375660   13076 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.crt.59094d61 ...
	I1014 07:21:47.375660   13076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.crt.59094d61: {Name:mke858ab0ac103663123708f9814cfdffe03211e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:21:47.377216   13076 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.key.59094d61 ...
	I1014 07:21:47.377216   13076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.key.59094d61: {Name:mk972fca892a193d6b4e84d42e27ed86ce9f0457 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:21:47.377811   13076 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.crt.59094d61 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.crt
	I1014 07:21:47.391806   13076 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.key.59094d61 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.key
	I1014 07:21:47.392823   13076 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.key
	I1014 07:21:47.392823   13076 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.crt with IP's: []
	I1014 07:21:47.675751   13076 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.crt ...
	I1014 07:21:47.675751   13076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.crt: {Name:mk5a9f1239213dd0a0f36b4ee41d5ac56cbd79c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:21:47.677918   13076 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.key ...
	I1014 07:21:47.677918   13076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.key: {Name:mkb931094180fef1516b0d43acb6430ecbcbb3fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
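	Each profile cert above is a minikubeCA-signed key pair whose IP SANs are listed on its Generating line; the apiserver cert, for instance, covers the in-cluster service IP (10.96.0.1), loopback, the node IP, and the HA VIP (172.20.111.254). A one-off equivalent with openssl (a sketch; file names assumed):
	
	# Issue an apiserver-style cert with the same IP SANs, signed by an existing CA key pair.
	openssl req -new -newkey rsa:2048 -nodes -subj "/CN=minikube" -keyout apiserver.key -out apiserver.csr
	openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 365 \
	  -extfile <(printf 'subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:172.20.108.120,IP:172.20.111.254') \
	  -out apiserver.crt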
	I1014 07:21:47.678894   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I1014 07:21:47.678894   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I1014 07:21:47.678894   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1014 07:21:47.678894   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1014 07:21:47.679943   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1014 07:21:47.679943   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1014 07:21:47.679943   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1014 07:21:47.691415   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1014 07:21:47.693468   13076 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\936.pem (1338 bytes)
	W1014 07:21:47.693468   13076 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\936_empty.pem, impossibly tiny 0 bytes
	I1014 07:21:47.694004   13076 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1014 07:21:47.694400   13076 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I1014 07:21:47.694671   13076 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1014 07:21:47.695091   13076 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I1014 07:21:47.695960   13076 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem (1708 bytes)
	I1014 07:21:47.696145   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\936.pem -> /usr/share/ca-certificates/936.pem
	I1014 07:21:47.696401   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem -> /usr/share/ca-certificates/9362.pem
	I1014 07:21:47.696580   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1014 07:21:47.697580   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 07:21:47.742705   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1014 07:21:47.790988   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 07:21:47.837975   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 07:21:47.884872   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1014 07:21:47.936878   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1014 07:21:47.985330   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 07:21:48.029181   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 07:21:48.078824   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\936.pem --> /usr/share/ca-certificates/936.pem (1338 bytes)
	I1014 07:21:48.127907   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem --> /usr/share/ca-certificates/9362.pem (1708 bytes)
	I1014 07:21:48.174542   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 07:21:48.218635   13076 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 07:21:48.264009   13076 ssh_runner.go:195] Run: openssl version
	I1014 07:21:48.282939   13076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 07:21:48.310256   13076 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 07:21:48.318050   13076 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 13:44 /usr/share/ca-certificates/minikubeCA.pem
	I1014 07:21:48.328768   13076 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 07:21:48.348090   13076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 07:21:48.376484   13076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/936.pem && ln -fs /usr/share/ca-certificates/936.pem /etc/ssl/certs/936.pem"
	I1014 07:21:48.408494   13076 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/936.pem
	I1014 07:21:48.416513   13076 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 14:00 /usr/share/ca-certificates/936.pem
	I1014 07:21:48.427815   13076 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/936.pem
	I1014 07:21:48.447033   13076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/936.pem /etc/ssl/certs/51391683.0"
	I1014 07:21:48.474293   13076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9362.pem && ln -fs /usr/share/ca-certificates/9362.pem /etc/ssl/certs/9362.pem"
	I1014 07:21:48.502293   13076 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9362.pem
	I1014 07:21:48.509688   13076 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 14:00 /usr/share/ca-certificates/9362.pem
	I1014 07:21:48.519811   13076 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9362.pem
	I1014 07:21:48.540403   13076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9362.pem /etc/ssl/certs/3ec20f2e.0"
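	The symlink names above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject hashes: the library only finds a CA in /etc/ssl/certs through its <hash>.0 name, which is why each installed PEM gets a hash-named link. Reproducing one by hand:
	
	# Compute the subject hash and create the lookup symlink OpenSSL expects.
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"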
	I1014 07:21:48.569007   13076 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 07:21:48.575758   13076 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1014 07:21:48.576118   13076 kubeadm.go:392] StartCluster: {Name:ha-132600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-132600 Namespace:default APIServerHAVIP:172.20.111.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.108.120 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 07:21:48.584978   13076 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1014 07:21:48.619430   13076 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 07:21:48.647606   13076 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 07:21:48.676291   13076 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 07:21:48.693679   13076 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 07:21:48.693679   13076 kubeadm.go:157] found existing configuration files:
	
	I1014 07:21:48.703613   13076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 07:21:48.722067   13076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 07:21:48.732618   13076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 07:21:48.759836   13076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 07:21:48.777622   13076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 07:21:48.789748   13076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 07:21:48.818512   13076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 07:21:48.837396   13076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 07:21:48.848696   13076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 07:21:48.876926   13076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 07:21:48.893530   13076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 07:21:48.904372   13076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 07:21:48.920210   13076 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1014 07:21:49.392185   13076 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 07:22:04.140256   13076 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1014 07:22:04.140412   13076 kubeadm.go:310] [preflight] Running pre-flight checks
	I1014 07:22:04.140501   13076 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 07:22:04.140848   13076 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 07:22:04.141206   13076 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 07:22:04.141311   13076 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 07:22:04.144180   13076 out.go:235]   - Generating certificates and keys ...
	I1014 07:22:04.144373   13076 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1014 07:22:04.144575   13076 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1014 07:22:04.144939   13076 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1014 07:22:04.145172   13076 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1014 07:22:04.145315   13076 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1014 07:22:04.145521   13076 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1014 07:22:04.145845   13076 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1014 07:22:04.146212   13076 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-132600 localhost] and IPs [172.20.108.120 127.0.0.1 ::1]
	I1014 07:22:04.146365   13076 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1014 07:22:04.146614   13076 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-132600 localhost] and IPs [172.20.108.120 127.0.0.1 ::1]
	I1014 07:22:04.147003   13076 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1014 07:22:04.147200   13076 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1014 07:22:04.147236   13076 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1014 07:22:04.147236   13076 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 07:22:04.147236   13076 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 07:22:04.147236   13076 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 07:22:04.147954   13076 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 07:22:04.147987   13076 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 07:22:04.147987   13076 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 07:22:04.147987   13076 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 07:22:04.148785   13076 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 07:22:04.151235   13076 out.go:235]   - Booting up control plane ...
	I1014 07:22:04.151235   13076 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 07:22:04.151819   13076 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 07:22:04.152046   13076 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 07:22:04.152333   13076 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 07:22:04.152333   13076 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 07:22:04.152333   13076 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1014 07:22:04.152333   13076 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 07:22:04.152333   13076 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 07:22:04.153426   13076 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 523.263843ms
	I1014 07:22:04.153534   13076 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1014 07:22:04.153534   13076 kubeadm.go:310] [api-check] The API server is healthy after 9.176461865s
	I1014 07:22:04.153534   13076 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1014 07:22:04.154243   13076 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1014 07:22:04.154243   13076 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1014 07:22:04.154779   13076 kubeadm.go:310] [mark-control-plane] Marking the node ha-132600 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1014 07:22:04.154779   13076 kubeadm.go:310] [bootstrap-token] Using token: rm3d8a.inqsemtt5b5l5x2e
	I1014 07:22:04.157926   13076 out.go:235]   - Configuring RBAC rules ...
	I1014 07:22:04.158760   13076 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1014 07:22:04.158928   13076 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1014 07:22:04.158928   13076 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1014 07:22:04.159656   13076 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1014 07:22:04.159851   13076 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1014 07:22:04.160089   13076 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1014 07:22:04.160270   13076 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1014 07:22:04.160529   13076 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1014 07:22:04.160529   13076 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1014 07:22:04.160529   13076 kubeadm.go:310] 
	I1014 07:22:04.160529   13076 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1014 07:22:04.160529   13076 kubeadm.go:310] 
	I1014 07:22:04.160529   13076 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1014 07:22:04.160529   13076 kubeadm.go:310] 
	I1014 07:22:04.161100   13076 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1014 07:22:04.161186   13076 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1014 07:22:04.161352   13076 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1014 07:22:04.161352   13076 kubeadm.go:310] 
	I1014 07:22:04.161534   13076 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1014 07:22:04.161534   13076 kubeadm.go:310] 
	I1014 07:22:04.161706   13076 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1014 07:22:04.161706   13076 kubeadm.go:310] 
	I1014 07:22:04.161950   13076 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1014 07:22:04.161950   13076 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1014 07:22:04.161950   13076 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1014 07:22:04.161950   13076 kubeadm.go:310] 
	I1014 07:22:04.162515   13076 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1014 07:22:04.162607   13076 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1014 07:22:04.162607   13076 kubeadm.go:310] 
	I1014 07:22:04.162607   13076 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token rm3d8a.inqsemtt5b5l5x2e \
	I1014 07:22:04.162607   13076 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f876f951efbe7f1ed47ca578ee0d33e6ce8fe69c1a008a8154ee00f56ac84e46 \
	I1014 07:22:04.163162   13076 kubeadm.go:310] 	--control-plane 
	I1014 07:22:04.163261   13076 kubeadm.go:310] 
	I1014 07:22:04.163340   13076 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1014 07:22:04.163340   13076 kubeadm.go:310] 
	I1014 07:22:04.163570   13076 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token rm3d8a.inqsemtt5b5l5x2e \
	I1014 07:22:04.163570   13076 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f876f951efbe7f1ed47ca578ee0d33e6ce8fe69c1a008a8154ee00f56ac84e46 
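The two join commands printed above differ only in the --control-plane flag; because the upload-certs phase was skipped earlier in this run, a new control-plane node would additionally need the CA certificates and service-account keys copied onto it by hand. As a hedged sketch of the standard kubeadm alternative (not what minikube does here), the certificates can be re-uploaded and the resulting key passed at join time:

	# Sketch only; <certificate-key> is whatever the upload-certs phase prints.
	sudo kubeadm init phase upload-certs --upload-certs
	sudo kubeadm join control-plane.minikube.internal:8443 --token rm3d8a.inqsemtt5b5l5x2e \
		--discovery-token-ca-cert-hash sha256:f876f951efbe7f1ed47ca578ee0d33e6ce8fe69c1a008a8154ee00f56ac84e46 \
		--control-plane --certificate-key <certificate-key>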
	I1014 07:22:04.163570   13076 cni.go:84] Creating CNI manager for ""
	I1014 07:22:04.163570   13076 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1014 07:22:04.167124   13076 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1014 07:22:04.179134   13076 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1014 07:22:04.187701   13076 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1014 07:22:04.187701   13076 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1014 07:22:04.233053   13076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1014 07:22:04.912894   13076 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1014 07:22:04.925574   13076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-132600 minikube.k8s.io/updated_at=2024_10_14T07_22_04_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b minikube.k8s.io/name=ha-132600 minikube.k8s.io/primary=true
	I1014 07:22:04.927795   13076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 07:22:04.958241   13076 ops.go:34] apiserver oom_adj: -16
	I1014 07:22:05.168034   13076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 07:22:05.667048   13076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 07:22:06.166157   13076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 07:22:06.667561   13076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 07:22:07.167283   13076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 07:22:07.305312   13076 kubeadm.go:1113] duration metric: took 2.3924152s to wait for elevateKubeSystemPrivileges
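The repeated "kubectl get sa default" runs above are a poll: minikube retries at roughly 500 ms intervals until the "default" ServiceAccount exists, which is what elevateKubeSystemPrivileges waits on before its cluster-admin binding can take effect. A minimal bash sketch of the same polling pattern (paths taken from the log; the loop itself is an assumption, not minikube's actual implementation):

	# Retry until the controller manager has created the default ServiceAccount.
	until sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default \
			--kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
		sleep 0.5
	done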
	I1014 07:22:07.306284   13076 kubeadm.go:394] duration metric: took 18.730142s to StartCluster
	I1014 07:22:07.306284   13076 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:22:07.306284   13076 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1014 07:22:07.308077   13076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:22:07.309777   13076 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1014 07:22:07.309777   13076 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.20.108.120 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1014 07:22:07.309953   13076 start.go:241] waiting for startup goroutines ...
	I1014 07:22:07.309868   13076 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1014 07:22:07.310143   13076 addons.go:69] Setting storage-provisioner=true in profile "ha-132600"
	I1014 07:22:07.310235   13076 addons.go:234] Setting addon storage-provisioner=true in "ha-132600"
	I1014 07:22:07.310143   13076 addons.go:69] Setting default-storageclass=true in profile "ha-132600"
	I1014 07:22:07.310402   13076 config.go:182] Loaded profile config "ha-132600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:22:07.310402   13076 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-132600"
	I1014 07:22:07.310402   13076 host.go:66] Checking if "ha-132600" exists ...
	I1014 07:22:07.311457   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:22:07.311926   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:22:07.495748   13076 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.20.96.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1014 07:22:08.130501   13076 start.go:971] {"host.minikube.internal": 172.20.96.1} host record injected into CoreDNS's ConfigMap
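The sed pipeline above rewrites the CoreDNS ConfigMap in place: it inserts a hosts block that resolves host.minikube.internal to the host gateway, and adds a log directive before errors. Reconstructed from the sed expression alone (a sketch of the patched Corefile fragment, not a dump of the live ConfigMap):

	.:53 {
	    log
	    errors
	    ...
	    hosts {
	       172.20.96.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf
	    ...
	}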
	I1014 07:22:09.594501   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:22:09.594501   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:09.594786   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:22:09.594878   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:09.595740   13076 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1014 07:22:09.596691   13076 kapi.go:59] client config for ha-132600: &rest.Config{Host:"https://172.20.111.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-132600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-132600\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2926ae0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1014 07:22:09.597939   13076 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 07:22:09.598267   13076 cert_rotation.go:140] Starting client certificate rotation controller
	I1014 07:22:09.598703   13076 addons.go:234] Setting addon default-storageclass=true in "ha-132600"
	I1014 07:22:09.598956   13076 host.go:66] Checking if "ha-132600" exists ...
	I1014 07:22:09.600143   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:22:09.600143   13076 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 07:22:09.600247   13076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1014 07:22:09.600352   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:22:11.901872   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:22:11.901872   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:11.902855   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:22:11.913525   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:22:11.914554   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:11.914725   13076 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1014 07:22:11.914841   13076 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1014 07:22:11.915002   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:22:14.163865   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:22:14.163865   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:14.163865   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:22:14.597752   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:22:14.597752   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:14.598273   13076 sshutil.go:53] new ssh client: &{IP:172.20.108.120 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\id_rsa Username:docker}
	I1014 07:22:14.755906   13076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 07:22:16.766331   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:22:16.766798   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:16.767079   13076 sshutil.go:53] new ssh client: &{IP:172.20.108.120 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\id_rsa Username:docker}
	I1014 07:22:16.907784   13076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1014 07:22:17.107355   13076 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1014 07:22:17.107355   13076 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1014 07:22:17.108322   13076 round_trippers.go:463] GET https://172.20.111.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1014 07:22:17.108322   13076 round_trippers.go:469] Request Headers:
	I1014 07:22:17.108322   13076 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:22:17.108322   13076 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:22:17.120122   13076 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1014 07:22:17.120927   13076 round_trippers.go:463] PUT https://172.20.111.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1014 07:22:17.121006   13076 round_trippers.go:469] Request Headers:
	I1014 07:22:17.121006   13076 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:22:17.121006   13076 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:22:17.121089   13076 round_trippers.go:473]     Content-Type: application/json
	I1014 07:22:17.124876   13076 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 07:22:17.128782   13076 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1014 07:22:17.133792   13076 addons.go:510] duration metric: took 9.8239112s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1014 07:22:17.133792   13076 start.go:246] waiting for cluster config update ...
	I1014 07:22:17.133792   13076 start.go:255] writing updated cluster config ...
	I1014 07:22:17.140793   13076 out.go:201] 
	I1014 07:22:17.150794   13076 config.go:182] Loaded profile config "ha-132600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:22:17.151784   13076 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\config.json ...
	I1014 07:22:17.157758   13076 out.go:177] * Starting "ha-132600-m02" control-plane node in "ha-132600" cluster
	I1014 07:22:17.160381   13076 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1014 07:22:17.160381   13076 cache.go:56] Caching tarball of preloaded images
	I1014 07:22:17.160381   13076 preload.go:172] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1014 07:22:17.160381   13076 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1014 07:22:17.160381   13076 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\config.json ...
	I1014 07:22:17.168672   13076 start.go:360] acquireMachinesLock for ha-132600-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 07:22:17.169194   13076 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-132600-m02"
	I1014 07:22:17.169334   13076 start.go:93] Provisioning new machine with config: &{Name:ha-132600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-132600 Namespace:default APIServerHAVIP:172.20.111.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.108.120 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1014 07:22:17.169334   13076 start.go:125] createHost starting for "m02" (driver="hyperv")
	I1014 07:22:17.171957   13076 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1014 07:22:17.171957   13076 start.go:159] libmachine.API.Create for "ha-132600" (driver="hyperv")
	I1014 07:22:17.171957   13076 client.go:168] LocalClient.Create starting
	I1014 07:22:17.171957   13076 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I1014 07:22:17.173148   13076 main.go:141] libmachine: Decoding PEM data...
	I1014 07:22:17.173148   13076 main.go:141] libmachine: Parsing certificate...
	I1014 07:22:17.173148   13076 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I1014 07:22:17.173148   13076 main.go:141] libmachine: Decoding PEM data...
	I1014 07:22:17.173802   13076 main.go:141] libmachine: Parsing certificate...
	I1014 07:22:17.173802   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I1014 07:22:19.094383   13076 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I1014 07:22:19.094383   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:19.094970   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I1014 07:22:20.808796   13076 main.go:141] libmachine: [stdout =====>] : False
	
	I1014 07:22:20.808796   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:20.808897   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1014 07:22:22.292959   13076 main.go:141] libmachine: [stdout =====>] : True
	
	I1014 07:22:22.292959   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:22.293074   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1014 07:22:25.811891   13076 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1014 07:22:25.811891   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:25.813583   13076 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1014 07:22:26.334218   13076 main.go:141] libmachine: Creating SSH key...
	I1014 07:22:26.579833   13076 main.go:141] libmachine: Creating VM...
	I1014 07:22:26.580305   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1014 07:22:29.402237   13076 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1014 07:22:29.402237   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:29.402237   13076 main.go:141] libmachine: Using switch "Default Switch"
	I1014 07:22:29.402237   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1014 07:22:31.181025   13076 main.go:141] libmachine: [stdout =====>] : True
	
	I1014 07:22:31.181025   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:31.181025   13076 main.go:141] libmachine: Creating VHD
	I1014 07:22:31.181025   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I1014 07:22:34.986705   13076 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 9BEBB030-6569-4A5D-A43D-C976E242B24D
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I1014 07:22:34.986953   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:34.986953   13076 main.go:141] libmachine: Writing magic tar header
	I1014 07:22:34.986953   13076 main.go:141] libmachine: Writing SSH key tar header
	I1014 07:22:34.998864   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I1014 07:22:38.117497   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:22:38.117497   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:38.117497   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\disk.vhd' -SizeBytes 20000MB
	I1014 07:22:40.750048   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:22:40.750048   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:40.750048   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-132600-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I1014 07:22:44.293371   13076 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-132600-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I1014 07:22:44.294009   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:44.294158   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-132600-m02 -DynamicMemoryEnabled $false
	I1014 07:22:46.495846   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:22:46.496106   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:46.496211   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-132600-m02 -Count 2
	I1014 07:22:48.637702   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:22:48.637702   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:48.637702   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-132600-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\boot2docker.iso'
	I1014 07:22:51.128606   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:22:51.128606   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:51.128606   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-132600-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\disk.vhd'
	I1014 07:22:53.765202   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:22:53.765775   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:53.765775   13076 main.go:141] libmachine: Starting VM...
	I1014 07:22:53.765847   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-132600-m02
	I1014 07:22:56.961396   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:22:56.961396   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:56.962040   13076 main.go:141] libmachine: Waiting for host to start...
	I1014 07:22:56.962040   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:22:59.251592   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:22:59.251592   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:59.251838   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:01.758067   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:23:01.758159   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:02.758293   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:04.921490   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:04.921565   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:04.921802   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:07.441595   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:23:07.441595   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:08.443053   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:10.659721   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:10.659721   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:10.660418   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:13.126618   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:23:13.126952   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:14.127183   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:16.307414   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:16.307414   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:16.307500   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:18.748902   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:23:18.749678   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:19.749962   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:21.893323   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:21.893323   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:21.893323   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:24.464351   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:23:24.465392   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:24.465471   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:26.567626   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:26.567626   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:26.567626   13076 machine.go:93] provisionDockerMachine start ...
	I1014 07:23:26.568400   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:28.681591   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:28.681591   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:28.682618   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:31.178308   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:23:31.178308   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:31.192309   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:23:31.207523   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.111.83 22 <nil> <nil>}
	I1014 07:23:31.208024   13076 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 07:23:31.337861   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1014 07:23:31.337861   13076 buildroot.go:166] provisioning hostname "ha-132600-m02"
	I1014 07:23:31.337941   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:33.402839   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:33.402839   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:33.403340   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:35.894312   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:23:35.894312   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:35.901210   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:23:35.901699   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.111.83 22 <nil> <nil>}
	I1014 07:23:35.901823   13076 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-132600-m02 && echo "ha-132600-m02" | sudo tee /etc/hostname
	I1014 07:23:36.071040   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-132600-m02
	
	I1014 07:23:36.071040   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:38.150707   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:38.151729   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:38.151729   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:40.639242   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:23:40.639242   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:40.644598   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:23:40.645419   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.111.83 22 <nil> <nil>}
	I1014 07:23:40.645490   13076 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-132600-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-132600-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-132600-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 07:23:40.801974   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 07:23:40.801974   13076 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I1014 07:23:40.801974   13076 buildroot.go:174] setting up certificates
	I1014 07:23:40.801974   13076 provision.go:84] configureAuth start
	I1014 07:23:40.801974   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:42.869101   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:42.869101   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:42.869101   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:45.382211   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:23:45.382688   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:45.382688   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:47.501182   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:47.501778   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:47.501832   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:50.008838   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:23:50.008838   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:50.009193   13076 provision.go:143] copyHostCerts
	I1014 07:23:50.009346   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I1014 07:23:50.009346   13076 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I1014 07:23:50.009346   13076 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I1014 07:23:50.010015   13076 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1014 07:23:50.011395   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I1014 07:23:50.011967   13076 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I1014 07:23:50.011967   13076 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I1014 07:23:50.011967   13076 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1014 07:23:50.013212   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I1014 07:23:50.014109   13076 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I1014 07:23:50.014109   13076 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I1014 07:23:50.014507   13076 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I1014 07:23:50.015649   13076 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-132600-m02 san=[127.0.0.1 172.20.111.83 ha-132600-m02 localhost minikube]
	I1014 07:23:50.130152   13076 provision.go:177] copyRemoteCerts
	I1014 07:23:50.140319   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 07:23:50.140319   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:52.242533   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:52.242533   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:52.243067   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:54.757999   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:23:54.758132   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:54.758132   13076 sshutil.go:53] new ssh client: &{IP:172.20.111.83 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\id_rsa Username:docker}
	I1014 07:23:54.867658   13076 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7273333s)
	I1014 07:23:54.867658   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1014 07:23:54.867658   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1014 07:23:54.915126   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1014 07:23:54.915808   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I1014 07:23:54.962775   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1014 07:23:54.963449   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1014 07:23:55.012733   13076 provision.go:87] duration metric: took 14.2107408s to configureAuth
	I1014 07:23:55.012733   13076 buildroot.go:189] setting minikube options for container-runtime
	I1014 07:23:55.013734   13076 config.go:182] Loaded profile config "ha-132600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:23:55.013734   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:57.174688   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:57.174688   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:57.175736   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:59.668037   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:23:59.668597   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:59.675074   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:23:59.675608   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.111.83 22 <nil> <nil>}
	I1014 07:23:59.675873   13076 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1014 07:23:59.826626   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1014 07:23:59.826626   13076 buildroot.go:70] root file system type: tmpfs
	I1014 07:23:59.826926   13076 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1014 07:23:59.827038   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:01.963890   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:01.964568   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:01.964568   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:04.515125   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:04.515943   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:04.521824   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:24:04.522248   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.111.83 22 <nil> <nil>}
	I1014 07:24:04.522248   13076 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.20.108.120"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1014 07:24:04.691427   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.20.108.120
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1014 07:24:04.692090   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:06.819823   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:06.820339   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:06.820339   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:09.321630   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:09.321630   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:09.326748   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:24:09.327748   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.111.83 22 <nil> <nil>}
	I1014 07:24:09.327808   13076 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1014 07:24:11.561493   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1014 07:24:11.561493   13076 machine.go:96] duration metric: took 44.993809s to provisionDockerMachine
	I1014 07:24:11.561493   13076 client.go:171] duration metric: took 1m54.3893961s to LocalClient.Create
	I1014 07:24:11.561493   13076 start.go:167] duration metric: took 1m54.3893961s to libmachine.API.Create "ha-132600"
	I1014 07:24:11.561493   13076 start.go:293] postStartSetup for "ha-132600-m02" (driver="hyperv")
	I1014 07:24:11.561493   13076 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 07:24:11.572490   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 07:24:11.572490   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:13.652544   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:13.653401   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:13.653599   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:16.188638   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:16.188638   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:16.189304   13076 sshutil.go:53] new ssh client: &{IP:172.20.111.83 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\id_rsa Username:docker}
	I1014 07:24:16.298812   13076 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7262867s)
	I1014 07:24:16.310231   13076 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 07:24:16.316927   13076 info.go:137] Remote host: Buildroot 2023.02.9
	I1014 07:24:16.316927   13076 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I1014 07:24:16.317465   13076 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I1014 07:24:16.318548   13076 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem -> 9362.pem in /etc/ssl/certs
	I1014 07:24:16.318548   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem -> /etc/ssl/certs/9362.pem
	I1014 07:24:16.332285   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 07:24:16.351469   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem --> /etc/ssl/certs/9362.pem (1708 bytes)
	I1014 07:24:16.401820   13076 start.go:296] duration metric: took 4.8403207s for postStartSetup
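
postStartSetup above mirrors anything staged under the host-side .minikube\files tree into the guest at the same relative path, which is how 9362.pem lands in /etc/ssl/certs. A quick check that the synced asset arrived, using the standard minikube ssh entry point against this profile:

    # Confirm the asset staged under ...\.minikube\files\etc\ssl\certs\ reached the guest.
    minikube -p ha-132600 ssh -- ls -l /etc/ssl/certs/9362.pem
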
	I1014 07:24:16.404096   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:18.500218   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:18.501234   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:18.501234   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:20.974982   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:20.975125   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:20.975125   13076 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\config.json ...
	I1014 07:24:20.978124   13076 start.go:128] duration metric: took 2m3.8086365s to createHost
	I1014 07:24:20.978209   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:23.112937   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:23.113001   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:23.113001   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:25.636675   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:25.637206   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:25.641953   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:24:25.642670   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.111.83 22 <nil> <nil>}
	I1014 07:24:25.642670   13076 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1014 07:24:25.768945   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728915865.766702127
	
	I1014 07:24:25.769174   13076 fix.go:216] guest clock: 1728915865.766702127
	I1014 07:24:25.769174   13076 fix.go:229] Guest: 2024-10-14 07:24:25.766702127 -0700 PDT Remote: 2024-10-14 07:24:20.978124 -0700 PDT m=+320.730336401 (delta=4.788578127s)
	I1014 07:24:25.769174   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:27.845838   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:27.845838   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:27.845838   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:30.364175   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:30.364263   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:30.369382   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:24:30.369967   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.111.83 22 <nil> <nil>}
	I1014 07:24:30.370049   13076 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1728915865
	I1014 07:24:30.508807   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Oct 14 14:24:25 UTC 2024
	
	I1014 07:24:30.508807   13076 fix.go:236] clock set: Mon Oct 14 14:24:25 UTC 2024
	 (err=<nil>)
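
The guest clock check above reads date +%s.%N over SSH, compares it against the host clock (a 4.79s skew here, guest ahead), and then pins the guest with date -s. The same comparison run by hand; the IP and key path are the ones recorded in this run, and the host side assumes a bash-capable shell (e.g. Git Bash on this Windows host):

    # Hand-rolled version of the clock comparison above.
    KEY='C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\id_rsa'
    GUEST=$(ssh -i "$KEY" docker@172.20.111.83 'date +%s.%N')
    HOST=$(date +%s.%N)
    awk "BEGIN { printf \"delta: %.3fs\\n\", $GUEST - $HOST }"
    # To reset the guest clock as the log does (epoch value from this run):
    ssh -i "$KEY" docker@172.20.111.83 'sudo date -s @1728915865'
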
	I1014 07:24:30.508807   13076 start.go:83] releasing machines lock for "ha-132600-m02", held for 2m13.3394475s
	I1014 07:24:30.508807   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:32.570516   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:32.570516   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:32.570516   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:35.081338   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:35.081338   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:35.084330   13076 out.go:177] * Found network options:
	I1014 07:24:35.087197   13076 out.go:177]   - NO_PROXY=172.20.108.120
	W1014 07:24:35.089531   13076 proxy.go:119] fail to check proxy env: Error ip not in block
	I1014 07:24:35.091879   13076 out.go:177]   - NO_PROXY=172.20.108.120
	W1014 07:24:35.094409   13076 proxy.go:119] fail to check proxy env: Error ip not in block
	W1014 07:24:35.095622   13076 proxy.go:119] fail to check proxy env: Error ip not in block
	I1014 07:24:35.098960   13076 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1014 07:24:35.099124   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:35.109488   13076 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1014 07:24:35.109488   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:37.293745   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:37.293818   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:37.293870   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:37.344951   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:37.344951   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:37.344951   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:39.933767   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:39.934139   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:39.934445   13076 sshutil.go:53] new ssh client: &{IP:172.20.111.83 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\id_rsa Username:docker}
	I1014 07:24:39.999458   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:39.999458   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:39.999458   13076 sshutil.go:53] new ssh client: &{IP:172.20.111.83 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\id_rsa Username:docker}
	I1014 07:24:40.039802   13076 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.930308s)
	W1014 07:24:40.039802   13076 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 07:24:40.051889   13076 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 07:24:40.057601   13076 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.9585844s)
	W1014 07:24:40.057601   13076 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
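
The exit-127 above is the probe tripping over the host binary name: the check runs inside the Linux guest via ssh_runner but keeps the Windows curl.exe spelling, which bash cannot find. The "Failing to connect to https://registry.k8s.io/" warning that follows therefore appears to reflect this probe failing, not a demonstrated network problem. The same check with the binary name the guest actually has:

    # Same reachability probe, with the Linux binary name.
    minikube -p ha-132600 ssh -- curl -sS -m 2 https://registry.k8s.io/
    # Exit 127 above meant "command not found"; a genuine connect failure or
    # timeout from curl would exit 7 or 28 instead.
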
	I1014 07:24:40.090774   13076 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1014 07:24:40.090774   13076 start.go:495] detecting cgroup driver to use...
	I1014 07:24:40.091315   13076 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 07:24:40.139757   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1014 07:24:40.172033   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	W1014 07:24:40.178804   13076 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W1014 07:24:40.178804   13076 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1014 07:24:40.194040   13076 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1014 07:24:40.209169   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1014 07:24:40.241042   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1014 07:24:40.273436   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1014 07:24:40.304842   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1014 07:24:40.336705   13076 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 07:24:40.371635   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1014 07:24:40.404378   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1014 07:24:40.435805   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
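
The run of sed -i edits above rewrites /etc/containerd/config.toml in place rather than regenerating it: pinning the pause image, forcing SystemdCgroup = false to match the "cgroupfs" choice, migrating v1 and runc.v1 runtime names to io.containerd.runc.v2, pointing conf_dir at /etc/cni/net.d, and re-adding enable_unprivileged_ports = true under the CRI plugin. One way to confirm the values those expressions leave behind:

    # Keys touched by the sed expressions above, with the values they set.
    minikube -p ha-132600 ssh -- \
        "grep -E 'sandbox_image|SystemdCgroup|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml"
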
	I1014 07:24:40.485319   13076 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 07:24:40.505473   13076 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1014 07:24:40.516975   13076 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1014 07:24:40.550905   13076 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
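
The status-255 sysctl above is expected on a fresh guest: net.bridge.bridge-nf-call-iptables only appears under /proc once the br_netfilter module is loaded, so minikube falls back to modprobe and then switches on IPv4 forwarding, the usual Kubernetes node prep. By hand:

    # Bridge-netfilter prep, as performed above; the final sysctl now resolves.
    sudo modprobe br_netfilter
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
    sudo sysctl net.bridge.bridge-nf-call-iptables
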
	I1014 07:24:40.581632   13076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:24:40.785773   13076 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1014 07:24:40.819978   13076 start.go:495] detecting cgroup driver to use...
	I1014 07:24:40.830992   13076 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1014 07:24:40.876801   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 07:24:40.913930   13076 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 07:24:40.966444   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 07:24:41.006432   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1014 07:24:41.044531   13076 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1014 07:24:41.114617   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1014 07:24:41.139995   13076 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 07:24:41.187779   13076 ssh_runner.go:195] Run: which cri-dockerd
	I1014 07:24:41.207367   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1014 07:24:41.226988   13076 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1014 07:24:41.272924   13076 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1014 07:24:41.470204   13076 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1014 07:24:41.650925   13076 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1014 07:24:41.650925   13076 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
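
With containerd and crio stopped, crictl is repointed at cri-dockerd's socket and a small daemon.json (130 bytes in this run) is pushed to carry Docker's cgroup-driver setting. The daemon.json contents are not printed in the log, so the second command below only inspects it; expecting a native.cgroupdriver=cgroupfs exec-opt in there is an assumption based on the 'configuring docker to use "cgroupfs"' message:

    # Inspect the two files written above.
    minikube -p ha-132600 ssh -- cat /etc/crictl.yaml       # runtime-endpoint: unix:///var/run/cri-dockerd.sock
    minikube -p ha-132600 ssh -- cat /etc/docker/daemon.json
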
	I1014 07:24:41.694417   13076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:24:41.893295   13076 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1014 07:25:43.008408   13076 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.115034s)
	I1014 07:25:43.020423   13076 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I1014 07:25:43.059257   13076 out.go:201] 
	W1014 07:25:43.062124   13076 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Oct 14 14:24:09 ha-132600-m02 systemd[1]: Starting Docker Application Container Engine...
	Oct 14 14:24:09 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:09.938489944Z" level=info msg="Starting up"
	Oct 14 14:24:09 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:09.941531731Z" level=info msg="containerd not running, starting managed containerd"
	Oct 14 14:24:09 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:09.942638427Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=671
	Oct 14 14:24:09 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:09.976195986Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.003978070Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004049170Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004119469Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004233069Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004318268Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004457268Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004664767Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004761167Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004782067Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004795467Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004887666Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.005331964Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.008453752Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.008545052Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.008673551Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.008759151Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.008859951Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.008988550Z" level=info msg="metadata content store policy set" policy=shared
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.034095951Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.034335550Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.034407650Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.034432250Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.034449450Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.034603849Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.034945948Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035129847Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035263947Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035294447Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035312846Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035328546Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035343646Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035419846Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035443346Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035495946Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035526946Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035542746Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035565345Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035583545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035598745Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035630745Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035648245Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035663945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035689645Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035709545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035731545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035750545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035764245Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035777845Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035792145Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035809345Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035833944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035849444Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035864444Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036025444Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036057644Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036074443Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036087843Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036099043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036115343Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036131843Z" level=info msg="NRI interface is disabled by configuration."
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036482842Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036737141Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036799541Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036822041Z" level=info msg="containerd successfully booted in 0.062283s"
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.009541514Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.046546477Z" level=info msg="Loading containers: start."
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.205429391Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.426461627Z" level=info msg="Loading containers: done."
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.448023414Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.448125314Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.448200613Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.448511212Z" level=info msg="Daemon has completed initialization"
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.556153647Z" level=info msg="API listen on /var/run/docker.sock"
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.556338446Z" level=info msg="API listen on [::]:2376"
	Oct 14 14:24:11 ha-132600-m02 systemd[1]: Started Docker Application Container Engine.
	Oct 14 14:24:41 ha-132600-m02 systemd[1]: Stopping Docker Application Container Engine...
	Oct 14 14:24:41 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:41.916201367Z" level=info msg="Processing signal 'terminated'"
	Oct 14 14:24:41 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:41.918697964Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Oct 14 14:24:41 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:41.919058063Z" level=info msg="Daemon shutdown complete"
	Oct 14 14:24:41 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:41.919215663Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Oct 14 14:24:41 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:41.919252063Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Oct 14 14:24:42 ha-132600-m02 systemd[1]: docker.service: Deactivated successfully.
	Oct 14 14:24:42 ha-132600-m02 systemd[1]: Stopped Docker Application Container Engine.
	Oct 14 14:24:42 ha-132600-m02 systemd[1]: Starting Docker Application Container Engine...
	Oct 14 14:24:42 ha-132600-m02 dockerd[1075]: time="2024-10-14T14:24:42.978494717Z" level=info msg="Starting up"
	Oct 14 14:25:43 ha-132600-m02 dockerd[1075]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Oct 14 14:25:43 ha-132600-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Oct 14 14:25:43 ha-132600-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Oct 14 14:25:43 ha-132600-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W1014 07:25:43.062124   13076 out.go:270] * 
	W1014 07:25:43.062827   13076 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 07:25:43.067027   13076 out.go:201] 
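
The journal above narrows the failure: the first dockerd (pid 665) starts its own managed containerd and comes up cleanly, but the restarted daemon (pid 1075) dials /run/containerd/containerd.sock, the system containerd socket, and times out after 60s. Since minikube had stopped the containerd service moments earlier (14:24:40), the restarted daemon appears to have been left pointing at a socket nothing was serving. The triage the error text itself suggests, runnable over the SSH identity recorded earlier in this log:

    # Triage on the affected node (IP and key path as recorded in this run).
    KEY='C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\id_rsa'
    ssh -i "$KEY" docker@172.20.111.83 'systemctl status docker.service'
    ssh -i "$KEY" docker@172.20.111.83 'sudo journalctl -xeu docker.service'
    ssh -i "$KEY" docker@172.20.111.83 'ls -l /run/containerd/containerd.sock'   # did the socket ever appear?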
	
	
	==> Docker <==
	Oct 14 14:22:32 ha-132600 cri-dockerd[1324]: time="2024-10-14T14:22:32Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d7137fb8ec569089599a03e11d18eda509d210d29fa54b5e6a0a8d7dd7a54e7f/resolv.conf as [nameserver 172.20.96.1]"
	Oct 14 14:22:32 ha-132600 cri-dockerd[1324]: time="2024-10-14T14:22:32Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/90ffeb0ce1aafcb639832f7144bd2dbb0b5c9deb53555b49e386b3d3e96d8bc3/resolv.conf as [nameserver 172.20.96.1]"
	Oct 14 14:22:32 ha-132600 cri-dockerd[1324]: time="2024-10-14T14:22:32Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7278ecf7cc9466e1fee2170d699055f6498d2ece8da31c4b3696ed85b3cd38cf/resolv.conf as [nameserver 172.20.96.1]"
	Oct 14 14:22:32 ha-132600 dockerd[1431]: time="2024-10-14T14:22:32.922383735Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 14 14:22:32 ha-132600 dockerd[1431]: time="2024-10-14T14:22:32.922463635Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 14 14:22:32 ha-132600 dockerd[1431]: time="2024-10-14T14:22:32.922481235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:22:32 ha-132600 dockerd[1431]: time="2024-10-14T14:22:32.922580735Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:22:32 ha-132600 dockerd[1431]: time="2024-10-14T14:22:32.996609949Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 14 14:22:32 ha-132600 dockerd[1431]: time="2024-10-14T14:22:32.996807948Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 14 14:22:32 ha-132600 dockerd[1431]: time="2024-10-14T14:22:32.996907948Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:22:32 ha-132600 dockerd[1431]: time="2024-10-14T14:22:32.998275745Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:22:33 ha-132600 dockerd[1431]: time="2024-10-14T14:22:33.069443031Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 14 14:22:33 ha-132600 dockerd[1431]: time="2024-10-14T14:22:33.070022429Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 14 14:22:33 ha-132600 dockerd[1431]: time="2024-10-14T14:22:33.070369527Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:22:33 ha-132600 dockerd[1431]: time="2024-10-14T14:22:33.076841098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:26:17 ha-132600 dockerd[1431]: time="2024-10-14T14:26:17.675114485Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 14 14:26:17 ha-132600 dockerd[1431]: time="2024-10-14T14:26:17.675249284Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 14 14:26:17 ha-132600 dockerd[1431]: time="2024-10-14T14:26:17.675264584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:26:17 ha-132600 dockerd[1431]: time="2024-10-14T14:26:17.675974482Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:26:17 ha-132600 cri-dockerd[1324]: time="2024-10-14T14:26:17Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/91751edf004eb5236aa2443a5cff30bdc82ba05d39a3583d9026a4f19faba52f/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Oct 14 14:26:19 ha-132600 cri-dockerd[1324]: time="2024-10-14T14:26:19Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Oct 14 14:26:19 ha-132600 dockerd[1431]: time="2024-10-14T14:26:19.456978196Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 14 14:26:19 ha-132600 dockerd[1431]: time="2024-10-14T14:26:19.457173796Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 14 14:26:19 ha-132600 dockerd[1431]: time="2024-10-14T14:26:19.457616296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:26:19 ha-132600 dockerd[1431]: time="2024-10-14T14:26:19.457980197Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2b8d44d586369       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   18 minutes ago      Running             busybox                   0                   91751edf004eb       busybox-7dff88458-kr92j
	5a6196684fc6e       c69fa2e9cbf5f                                                                                         22 minutes ago      Running             coredns                   0                   7278ecf7cc946       coredns-7c65d6cfc9-4qfrq
	81d6fdac8115f       6e38f40d628db                                                                                         22 minutes ago      Running             storage-provisioner       0                   90ffeb0ce1aaf       storage-provisioner
	52c3e5370a6c8       c69fa2e9cbf5f                                                                                         22 minutes ago      Running             coredns                   0                   d7137fb8ec569       coredns-7c65d6cfc9-zf6cd
	dae2c0aa67af3       kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387              22 minutes ago      Running             kindnet-cni               0                   277e64b59b8aa       kindnet-rkjqr
	4745a4b0dc379       60c005f310ff3                                                                                         22 minutes ago      Running             kube-proxy                0                   15f605d31df67       kube-proxy-zkbj8
	b08f7a3c2a5a6       ghcr.io/kube-vip/kube-vip@sha256:9e23baad11ae3e69d739430b9fdb60df22356b7da4b4f4e458fae0541619deb4     22 minutes ago      Running             kube-vip                  0                   46cfd7b278d40       kube-vip-ha-132600
	84fd76d493d80       2e96e5913fc06                                                                                         22 minutes ago      Running             etcd                      0                   d71566d00e2f9       etcd-ha-132600
	b661cb6713103       6bab7719df100                                                                                         22 minutes ago      Running             kube-apiserver            0                   cb18be6165830       kube-apiserver-ha-132600
	35c870864a800       9aa1fad941575                                                                                         22 minutes ago      Running             kube-scheduler            0                   36593363b631a       kube-scheduler-ha-132600
	4a8cce31aa3a2       175ffd71cce3d                                                                                         22 minutes ago      Running             kube-controller-manager   0                   f5fa69fe68b07       kube-controller-manager-ha-132600
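
The table above is CRI-level state and can be regenerated on the node itself; adding -a also lists exited containers that this summary omits:

    # Regenerate the container-status table on the control-plane node.
    minikube -p ha-132600 ssh -- sudo crictl ps -a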
	
	
	==> coredns [52c3e5370a6c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6020d6b23d8ab86e45d1d2aab12a43bd19ffd1800c3e6fd0a66779be525e59cd5618fbe141b55ee3a43e920652bc72e7efafa696e55e63b2f614e5fc80dabe10
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:41885 - 43638 "HINFO IN 5147527986633681541.7511835962423692488. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.048565283s
	[INFO] 10.244.0.4:49801 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0004036s
	[INFO] 10.244.0.4:57121 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.178873632s
	[INFO] 10.244.0.4:54985 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000301s
	[INFO] 10.244.0.4:47603 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000273799s
	[INFO] 10.244.0.4:50030 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000276399s
	[INFO] 10.244.0.4:38124 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000439399s
	[INFO] 10.244.0.4:43764 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000347699s
	[INFO] 10.244.0.4:53583 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0003446s
	[INFO] 10.244.0.4:50771 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0002502s
	[INFO] 10.244.0.4:32805 - 5 "PTR IN 1.96.20.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.0002399s
	
	
	==> coredns [5a6196684fc6] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6020d6b23d8ab86e45d1d2aab12a43bd19ffd1800c3e6fd0a66779be525e59cd5618fbe141b55ee3a43e920652bc72e7efafa696e55e63b2f614e5fc80dabe10
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:50464 - 22593 "HINFO IN 8090798371432773748.7872157757733337044. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.084180423s
	[INFO] 10.244.0.4:46373 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.004105792s
	[INFO] 10.244.0.4:41175 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.166814894s
	[INFO] 10.244.0.4:43040 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002867s
	[INFO] 10.244.0.4:33549 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.045685805s
	[INFO] 10.244.0.4:60793 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000884598s
	[INFO] 10.244.0.4:57558 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001339s
	[INFO] 10.244.0.4:33138 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.040147217s
	[INFO] 10.244.0.4:46303 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0002673s
	[INFO] 10.244.0.4:60761 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001771s
	[INFO] 10.244.0.4:55567 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000249599s
	
	
	==> describe nodes <==
	Name:               ha-132600
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-132600
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b
	                    minikube.k8s.io/name=ha-132600
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_14T07_22_04_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Oct 2024 14:22:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-132600
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Oct 2024 14:44:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 14 Oct 2024 14:41:59 +0000   Mon, 14 Oct 2024 14:22:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 14 Oct 2024 14:41:59 +0000   Mon, 14 Oct 2024 14:22:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 14 Oct 2024 14:41:59 +0000   Mon, 14 Oct 2024 14:22:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 14 Oct 2024 14:41:59 +0000   Mon, 14 Oct 2024 14:22:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.20.108.120
	  Hostname:    ha-132600
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 07bcc58a0c7e4f0eaae96253dcf0a4ae
	  System UUID:                e06d00fc-5ebc-1a49-bf31-4c6ce500fe9c
	  Boot ID:                    215d765d-07e3-4bb7-ba3c-58ab142d5afa
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-kr92j              0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 coredns-7c65d6cfc9-4qfrq             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     22m
	  kube-system                 coredns-7c65d6cfc9-zf6cd             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     22m
	  kube-system                 etcd-ha-132600                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         22m
	  kube-system                 kindnet-rkjqr                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      22m
	  kube-system                 kube-apiserver-ha-132600             250m (12%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-ha-132600    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-zkbj8                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-scheduler-ha-132600             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-vip-ha-132600                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 22m   kube-proxy       
	  Normal  Starting                 22m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  22m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  22m   kubelet          Node ha-132600 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m   kubelet          Node ha-132600 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m   kubelet          Node ha-132600 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           22m   node-controller  Node ha-132600 event: Registered Node ha-132600 in Controller
	  Normal  NodeReady                22m   kubelet          Node ha-132600 status is now: NodeReady
	
	
	Name:               ha-132600-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-132600-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b
	                    minikube.k8s.io/name=ha-132600
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_14T07_42_14_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Oct 2024 14:42:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-132600-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Oct 2024 14:44:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 14 Oct 2024 14:43:14 +0000   Mon, 14 Oct 2024 14:42:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 14 Oct 2024 14:43:14 +0000   Mon, 14 Oct 2024 14:42:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 14 Oct 2024 14:43:14 +0000   Mon, 14 Oct 2024 14:42:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 14 Oct 2024 14:43:14 +0000   Mon, 14 Oct 2024 14:42:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.20.111.174
	  Hostname:    ha-132600-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 3833a05e76bc4842ac73a1d2223670ec
	  System UUID:                f1d27ade-116b-0642-ac95-727d62870b2a
	  Boot ID:                    07765716-ab52-4f7d-8d78-8fbd996c8cc0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-8thz6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kindnet-dznf8              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m36s
	  kube-system                 kube-proxy-q6wxd           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m23s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m37s (x2 over 2m37s)  kubelet          Node ha-132600-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  2m37s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  CIDRAssignmentFailed     2m36s                  cidrAllocator    Node ha-132600-m03 status is now: CIDRAssignmentFailed
	  Normal  NodeHasNoDiskPressure    2m36s (x2 over 2m37s)  kubelet          Node ha-132600-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m36s (x2 over 2m37s)  kubelet          Node ha-132600-m03 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2m33s                  node-controller  Node ha-132600-m03 event: Registered Node ha-132600-m03 in Controller
	  Normal  NodeReady                2m4s                   kubelet          Node ha-132600-m03 status is now: NodeReady
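
Both described nodes end up Ready, but m03's event trail records a transient CIDRAssignmentFailed from the cidrAllocator before the PodCIDR 10.244.1.0/24 shown above was applied. Worth confirming the assignment stuck and that the event did not recur, assuming kubectl is pointed at this cluster's kubeconfig:

    # Check the pod CIDR that eventually landed on m03, and its full event trail.
    kubectl get node ha-132600-m03 -o jsonpath='{.spec.podCIDR}{"\n"}'
    kubectl get events -A --field-selector involvedObject.name=ha-132600-m03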
	
	
	==> dmesg <==
	[  +7.050916] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +45.956887] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.179434] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[Oct14 14:21] systemd-fstab-generator[997]: Ignoring "noauto" option for root device
	[  +0.095957] kauditd_printk_skb: 65 callbacks suppressed
	[  +0.527367] systemd-fstab-generator[1037]: Ignoring "noauto" option for root device
	[  +0.185829] systemd-fstab-generator[1049]: Ignoring "noauto" option for root device
	[  +0.220100] systemd-fstab-generator[1063]: Ignoring "noauto" option for root device
	[  +2.802946] systemd-fstab-generator[1277]: Ignoring "noauto" option for root device
	[  +0.209648] systemd-fstab-generator[1289]: Ignoring "noauto" option for root device
	[  +0.207153] systemd-fstab-generator[1301]: Ignoring "noauto" option for root device
	[  +0.288223] systemd-fstab-generator[1316]: Ignoring "noauto" option for root device
	[ +11.411364] systemd-fstab-generator[1416]: Ignoring "noauto" option for root device
	[  +0.107418] kauditd_printk_skb: 202 callbacks suppressed
	[  +3.752049] systemd-fstab-generator[1674]: Ignoring "noauto" option for root device
	[  +6.264384] systemd-fstab-generator[1823]: Ignoring "noauto" option for root device
	[  +0.102334] kauditd_printk_skb: 70 callbacks suppressed
	[  +5.637673] kauditd_printk_skb: 67 callbacks suppressed
	[Oct14 14:22] systemd-fstab-generator[2317]: Ignoring "noauto" option for root device
	[  +5.137329] kauditd_printk_skb: 17 callbacks suppressed
	[  +8.104433] kauditd_printk_skb: 29 callbacks suppressed
	[Oct14 14:26] kauditd_printk_skb: 28 callbacks suppressed
	[Oct14 14:42] hrtimer: interrupt took 6612884 ns
	
	
	==> etcd [84fd76d493d8] <==
	{"level":"warn","ts":"2024-10-14T14:42:06.625714Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"233.064235ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-14T14:42:06.625791Z","caller":"traceutil/trace.go:171","msg":"trace[1441231401] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2578; }","duration":"233.140335ms","start":"2024-10-14T14:42:06.392638Z","end":"2024-10-14T14:42:06.625778Z","steps":["trace[1441231401] 'agreement among raft nodes before linearized reading'  (duration: 233.011936ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-14T14:42:06.838240Z","caller":"traceutil/trace.go:171","msg":"trace[129313711] linearizableReadLoop","detail":"{readStateIndex:2841; appliedIndex:2840; }","duration":"210.836289ms","start":"2024-10-14T14:42:06.627384Z","end":"2024-10-14T14:42:06.838220Z","steps":["trace[129313711] 'read index received'  (duration: 140.43006ms)","trace[129313711] 'applied index is now lower than readState.Index'  (duration: 70.404629ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-14T14:42:06.838564Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"211.106689ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-14T14:42:06.838765Z","caller":"traceutil/trace.go:171","msg":"trace[744257141] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2578; }","duration":"211.373088ms","start":"2024-10-14T14:42:06.627381Z","end":"2024-10-14T14:42:06.838754Z","steps":["trace[744257141] 'agreement among raft nodes before linearized reading'  (duration: 210.908189ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-14T14:42:07.474608Z","caller":"traceutil/trace.go:171","msg":"trace[1611887997] linearizableReadLoop","detail":"{readStateIndex:2842; appliedIndex:2841; }","duration":"173.05738ms","start":"2024-10-14T14:42:07.301532Z","end":"2024-10-14T14:42:07.474589Z","steps":["trace[1611887997] 'read index received'  (duration: 172.879381ms)","trace[1611887997] 'applied index is now lower than readState.Index'  (duration: 177.299µs)"],"step_count":2}
	{"level":"info","ts":"2024-10-14T14:42:07.474817Z","caller":"traceutil/trace.go:171","msg":"trace[1935321534] transaction","detail":"{read_only:false; response_revision:2579; number_of_response:1; }","duration":"268.831248ms","start":"2024-10-14T14:42:07.205940Z","end":"2024-10-14T14:42:07.474771Z","steps":["trace[1935321534] 'process raft request'  (duration: 268.413149ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-14T14:42:07.475266Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"173.720679ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-14T14:42:07.475322Z","caller":"traceutil/trace.go:171","msg":"trace[213807606] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:2579; }","duration":"173.789679ms","start":"2024-10-14T14:42:07.301524Z","end":"2024-10-14T14:42:07.475314Z","steps":["trace[213807606] 'agreement among raft nodes before linearized reading'  (duration: 173.704979ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-14T14:42:19.018144Z","caller":"traceutil/trace.go:171","msg":"trace[410087756] transaction","detail":"{read_only:false; response_revision:2632; number_of_response:1; }","duration":"175.067373ms","start":"2024-10-14T14:42:18.843055Z","end":"2024-10-14T14:42:19.018123Z","steps":["trace[410087756] 'process raft request'  (duration: 174.514675ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-14T14:42:19.375954Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"237.591721ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-132600-m03\" ","response":"range_response_count:1 size:2962"}
	{"level":"info","ts":"2024-10-14T14:42:19.377451Z","caller":"traceutil/trace.go:171","msg":"trace[946791809] range","detail":"{range_begin:/registry/minions/ha-132600-m03; range_end:; response_count:1; response_revision:2632; }","duration":"239.116617ms","start":"2024-10-14T14:42:19.138318Z","end":"2024-10-14T14:42:19.377435Z","steps":["trace[946791809] 'range keys from in-memory index tree'  (duration: 237.374121ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-14T14:42:25.277317Z","caller":"traceutil/trace.go:171","msg":"trace[682053374] linearizableReadLoop","detail":"{readStateIndex:2917; appliedIndex:2916; }","duration":"140.762656ms","start":"2024-10-14T14:42:25.136515Z","end":"2024-10-14T14:42:25.277278Z","steps":["trace[682053374] 'read index received'  (duration: 140.586757ms)","trace[682053374] 'applied index is now lower than readState.Index'  (duration: 175.299µs)"],"step_count":2}
	{"level":"warn","ts":"2024-10-14T14:42:25.277542Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"140.959056ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-132600-m03\" ","response":"range_response_count:1 size:3114"}
	{"level":"info","ts":"2024-10-14T14:42:25.277634Z","caller":"traceutil/trace.go:171","msg":"trace[820348600] range","detail":"{range_begin:/registry/minions/ha-132600-m03; range_end:; response_count:1; response_revision:2648; }","duration":"141.112155ms","start":"2024-10-14T14:42:25.136511Z","end":"2024-10-14T14:42:25.277623Z","steps":["trace[820348600] 'agreement among raft nodes before linearized reading'  (duration: 140.928056ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-14T14:42:25.279675Z","caller":"traceutil/trace.go:171","msg":"trace[858523099] transaction","detail":"{read_only:false; response_revision:2648; number_of_response:1; }","duration":"166.194393ms","start":"2024-10-14T14:42:25.113464Z","end":"2024-10-14T14:42:25.279658Z","steps":["trace[858523099] 'process raft request'  (duration: 163.709399ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-14T14:42:25.921423Z","caller":"traceutil/trace.go:171","msg":"trace[45131034] transaction","detail":"{read_only:false; response_revision:2649; number_of_response:1; }","duration":"240.489012ms","start":"2024-10-14T14:42:25.680919Z","end":"2024-10-14T14:42:25.921408Z","steps":["trace[45131034] 'process raft request'  (duration: 240.146113ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-14T14:42:31.076106Z","caller":"traceutil/trace.go:171","msg":"trace[1663891298] transaction","detail":"{read_only:false; response_revision:2663; number_of_response:1; }","duration":"120.948703ms","start":"2024-10-14T14:42:30.955138Z","end":"2024-10-14T14:42:31.076086Z","steps":["trace[1663891298] 'process raft request'  (duration: 120.791304ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-14T14:42:31.287943Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"149.945532ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-132600-m03\" ","response":"range_response_count:1 size:3114"}
	{"level":"info","ts":"2024-10-14T14:42:31.288066Z","caller":"traceutil/trace.go:171","msg":"trace[1149000350] range","detail":"{range_begin:/registry/minions/ha-132600-m03; range_end:; response_count:1; response_revision:2663; }","duration":"150.051932ms","start":"2024-10-14T14:42:31.137978Z","end":"2024-10-14T14:42:31.288030Z","steps":["trace[1149000350] 'range keys from in-memory index tree'  (duration: 149.753933ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-14T14:42:32.313366Z","caller":"traceutil/trace.go:171","msg":"trace[1148264888] linearizableReadLoop","detail":"{readStateIndex:2939; appliedIndex:2938; }","duration":"175.689169ms","start":"2024-10-14T14:42:32.137533Z","end":"2024-10-14T14:42:32.313222Z","steps":["trace[1148264888] 'read index received'  (duration: 175.27797ms)","trace[1148264888] 'applied index is now lower than readState.Index'  (duration: 410.299µs)"],"step_count":2}
	{"level":"info","ts":"2024-10-14T14:42:32.313766Z","caller":"traceutil/trace.go:171","msg":"trace[53442055] transaction","detail":"{read_only:false; response_revision:2667; number_of_response:1; }","duration":"341.166363ms","start":"2024-10-14T14:42:31.972588Z","end":"2024-10-14T14:42:32.313754Z","steps":["trace[53442055] 'process raft request'  (duration: 340.380665ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-14T14:42:32.314054Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-14T14:42:31.972576Z","time spent":"341.263763ms","remote":"127.0.0.1:55456","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1094,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:2661 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1021 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-10-14T14:42:32.314975Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"177.510264ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-132600-m03\" ","response":"range_response_count:1 size:3114"}
	{"level":"info","ts":"2024-10-14T14:42:32.315174Z","caller":"traceutil/trace.go:171","msg":"trace[597120152] range","detail":"{range_begin:/registry/minions/ha-132600-m03; range_end:; response_count:1; response_revision:2667; }","duration":"177.711264ms","start":"2024-10-14T14:42:32.137452Z","end":"2024-10-14T14:42:32.315163Z","steps":["trace[597120152] 'agreement among raft nodes before linearized reading'  (duration: 177.444264ms)"],"step_count":1}
	
	
	==> kernel <==
	 14:44:50 up 24 min,  0 users,  load average: 0.17, 0.37, 0.34
	Linux ha-132600 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [dae2c0aa67af] <==
	I1014 14:43:47.563646       1 main.go:323] Node ha-132600-m03 has CIDR [10.244.1.0/24] 
	I1014 14:43:57.564816       1 main.go:296] Handling node with IPs: map[172.20.108.120:{}]
	I1014 14:43:57.564997       1 main.go:300] handling current node
	I1014 14:43:57.565018       1 main.go:296] Handling node with IPs: map[172.20.111.174:{}]
	I1014 14:43:57.565026       1 main.go:323] Node ha-132600-m03 has CIDR [10.244.1.0/24] 
	I1014 14:44:07.563044       1 main.go:296] Handling node with IPs: map[172.20.108.120:{}]
	I1014 14:44:07.563154       1 main.go:300] handling current node
	I1014 14:44:07.563183       1 main.go:296] Handling node with IPs: map[172.20.111.174:{}]
	I1014 14:44:07.563205       1 main.go:323] Node ha-132600-m03 has CIDR [10.244.1.0/24] 
	I1014 14:44:17.563789       1 main.go:296] Handling node with IPs: map[172.20.111.174:{}]
	I1014 14:44:17.564104       1 main.go:323] Node ha-132600-m03 has CIDR [10.244.1.0/24] 
	I1014 14:44:17.564758       1 main.go:296] Handling node with IPs: map[172.20.108.120:{}]
	I1014 14:44:17.564813       1 main.go:300] handling current node
	I1014 14:44:27.565240       1 main.go:296] Handling node with IPs: map[172.20.111.174:{}]
	I1014 14:44:27.565426       1 main.go:323] Node ha-132600-m03 has CIDR [10.244.1.0/24] 
	I1014 14:44:27.565721       1 main.go:296] Handling node with IPs: map[172.20.108.120:{}]
	I1014 14:44:27.565753       1 main.go:300] handling current node
	I1014 14:44:37.565340       1 main.go:296] Handling node with IPs: map[172.20.108.120:{}]
	I1014 14:44:37.565578       1 main.go:300] handling current node
	I1014 14:44:37.565598       1 main.go:296] Handling node with IPs: map[172.20.111.174:{}]
	I1014 14:44:37.565605       1 main.go:323] Node ha-132600-m03 has CIDR [10.244.1.0/24] 
	I1014 14:44:47.563490       1 main.go:296] Handling node with IPs: map[172.20.108.120:{}]
	I1014 14:44:47.563929       1 main.go:300] handling current node
	I1014 14:44:47.564015       1 main.go:296] Handling node with IPs: map[172.20.111.174:{}]
	I1014 14:44:47.564107       1 main.go:323] Node ha-132600-m03 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [b661cb671310] <==
	I1014 14:21:58.925388       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1014 14:21:58.925426       1 policy_source.go:224] refreshing policies
	I1014 14:21:58.928296       1 controller.go:615] quota admission added evaluator for: namespaces
	E1014 14:21:58.950810       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1014 14:21:59.174958       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1014 14:21:59.819127       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1014 14:21:59.829797       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1014 14:21:59.829835       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1014 14:22:00.942976       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1014 14:22:01.015766       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1014 14:22:01.150423       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1014 14:22:01.164901       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.20.108.120]
	I1014 14:22:01.165818       1 controller.go:615] quota admission added evaluator for: endpoints
	I1014 14:22:01.175247       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1014 14:22:01.835583       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1014 14:22:03.554010       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1014 14:22:03.589407       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1014 14:22:03.617368       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1014 14:22:07.346752       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1014 14:22:07.516653       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E1014 14:38:03.250866       1 conn.go:339] Error on socket receive: read tcp 172.20.111.254:8443->172.20.96.1:57382: use of closed network connection
	E1014 14:38:04.469699       1 conn.go:339] Error on socket receive: read tcp 172.20.111.254:8443->172.20.96.1:57390: use of closed network connection
	E1014 14:38:05.596658       1 conn.go:339] Error on socket receive: read tcp 172.20.111.254:8443->172.20.96.1:57398: use of closed network connection
	E1014 14:38:40.228042       1 conn.go:339] Error on socket receive: read tcp 172.20.111.254:8443->172.20.96.1:57419: use of closed network connection
	E1014 14:38:50.673483       1 conn.go:339] Error on socket receive: read tcp 172.20.111.254:8443->172.20.96.1:57421: use of closed network connection
	
	
	==> kube-controller-manager [4a8cce31aa3a] <==
	I1014 14:41:59.832379       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600"
	I1014 14:42:14.020113       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-132600-m03\" does not exist"
	I1014 14:42:14.099308       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-132600-m03" podCIDRs=["10.244.1.0/24"]
	I1014 14:42:14.099376       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600-m03"
	E1014 14:42:14.131350       1 range_allocator.go:427] "Failed to update node PodCIDR after multiple attempts" err="failed to patch node CIDR: Node \"ha-132600-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.2.0/24\", \"10.244.1.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="ha-132600-m03" podCIDRs=["10.244.2.0/24"]
	E1014 14:42:14.131490       1 range_allocator.go:433] "CIDR assignment for node failed. Releasing allocated CIDR" err="failed to patch node CIDR: Node \"ha-132600-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.2.0/24\", \"10.244.1.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="ha-132600-m03"
	E1014 14:42:14.131543       1 range_allocator.go:246] "Unhandled Error" err="error syncing 'ha-132600-m03': failed to patch node CIDR: Node \"ha-132600-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.2.0/24\", \"10.244.1.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid], requeuing" logger="UnhandledError"
	I1014 14:42:14.132334       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600-m03"
	I1014 14:42:14.137556       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600-m03"
	I1014 14:42:14.252734       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600-m03"
	I1014 14:42:14.858650       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600-m03"
	I1014 14:42:17.081634       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-132600-m03"
	I1014 14:42:17.154621       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600-m03"
	I1014 14:42:24.267607       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600-m03"
	I1014 14:42:44.660569       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600-m03"
	I1014 14:42:46.187605       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-132600-m03"
	I1014 14:42:46.189657       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600-m03"
	I1014 14:42:46.206592       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600-m03"
	I1014 14:42:46.221387       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="51µs"
	I1014 14:42:46.232286       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="51.3µs"
	I1014 14:42:46.252360       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="196µs"
	I1014 14:42:47.108442       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600-m03"
	I1014 14:42:49.071073       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="17.613156ms"
	I1014 14:42:49.071582       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="49.8µs"
	I1014 14:43:14.925968       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600-m03"
	
	
	==> kube-proxy [4745a4b0dc37] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1014 14:22:09.024513       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1014 14:22:09.047174       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["172.20.108.120"]
	E1014 14:22:09.047308       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1014 14:22:09.124346       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1014 14:22:09.124494       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1014 14:22:09.124529       1 server_linux.go:169] "Using iptables Proxier"
	I1014 14:22:09.128809       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1014 14:22:09.129721       1 server.go:483] "Version info" version="v1.31.1"
	I1014 14:22:09.129805       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 14:22:09.131652       1 config.go:199] "Starting service config controller"
	I1014 14:22:09.131988       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1014 14:22:09.132269       1 config.go:105] "Starting endpoint slice config controller"
	I1014 14:22:09.132619       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1014 14:22:09.133592       1 config.go:328] "Starting node config controller"
	I1014 14:22:09.135184       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1014 14:22:09.232621       1 shared_informer.go:320] Caches are synced for service config
	I1014 14:22:09.233933       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1014 14:22:09.236054       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [35c870864a80] <==
	W1014 14:21:59.944424       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1014 14:21:59.944452       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 14:21:59.979078       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1014 14:21:59.979365       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 14:22:00.111250       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1014 14:22:00.111664       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 14:22:00.192015       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1014 14:22:00.192346       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1014 14:22:00.196984       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1014 14:22:00.197112       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 14:22:00.197208       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1014 14:22:00.197241       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1014 14:22:00.212172       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1014 14:22:00.212214       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1014 14:22:00.260144       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1014 14:22:00.261002       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1014 14:22:00.276235       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1014 14:22:00.276419       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1014 14:22:00.295031       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1014 14:22:00.295892       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 14:22:00.311664       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1014 14:22:00.313212       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1014 14:22:00.354351       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1014 14:22:00.355204       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 14:22:02.117335       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 14 14:40:03 ha-132600 kubelet[2324]: E1014 14:40:03.676816    2324 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 14 14:40:03 ha-132600 kubelet[2324]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 14 14:40:03 ha-132600 kubelet[2324]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 14 14:40:03 ha-132600 kubelet[2324]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 14 14:40:03 ha-132600 kubelet[2324]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 14 14:41:03 ha-132600 kubelet[2324]: E1014 14:41:03.675764    2324 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 14 14:41:03 ha-132600 kubelet[2324]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 14 14:41:03 ha-132600 kubelet[2324]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 14 14:41:03 ha-132600 kubelet[2324]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 14 14:41:03 ha-132600 kubelet[2324]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 14 14:42:03 ha-132600 kubelet[2324]: E1014 14:42:03.676366    2324 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 14 14:42:03 ha-132600 kubelet[2324]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 14 14:42:03 ha-132600 kubelet[2324]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 14 14:42:03 ha-132600 kubelet[2324]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 14 14:42:03 ha-132600 kubelet[2324]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 14 14:43:03 ha-132600 kubelet[2324]: E1014 14:43:03.675510    2324 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 14 14:43:03 ha-132600 kubelet[2324]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 14 14:43:03 ha-132600 kubelet[2324]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 14 14:43:03 ha-132600 kubelet[2324]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 14 14:43:03 ha-132600 kubelet[2324]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 14 14:44:03 ha-132600 kubelet[2324]: E1014 14:44:03.676935    2324 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 14 14:44:03 ha-132600 kubelet[2324]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 14 14:44:03 ha-132600 kubelet[2324]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 14 14:44:03 ha-132600 kubelet[2324]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 14 14:44:03 ha-132600 kubelet[2324]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
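The CIDRAssignmentFailed event on ha-132600-m03 lines up with the range_allocator errors in the kube-controller-manager log above: after 10.244.1.0/24 was assigned, a second patch tried to add 10.244.2.0/24, the API server rejected the dual-CIDR update, and the allocator released the extra range, so the node kept 10.244.1.0/24 (as the kindnet log also shows). A minimal check of which CIDR the node settled on (a sketch, assuming the ha-132600 kubeconfig context from this run):

	kubectl --context ha-132600 get node ha-132600-m03 -o jsonpath='{.spec.podCIDRs}'
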
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-132600 -n ha-132600
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-132600 -n ha-132600: (11.9050945s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-132600 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-7dff88458-rng7p
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/HAppyAfterClusterStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-132600 describe pod busybox-7dff88458-rng7p
helpers_test.go:282: (dbg) kubectl --context ha-132600 describe pod busybox-7dff88458-rng7p:

                                                
                                                
-- stdout --
	Name:             busybox-7dff88458-rng7p
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7dff88458
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7dff88458
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9hpj2 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-9hpj2:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                   From               Message
	  ----     ------            ----                  ----               -------
	  Warning  FailedScheduling  3m30s (x4 over 18m)   default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  2m7s (x2 over 2m17s)  default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
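The FailedScheduling events above are consistent with pod anti-affinity between busybox replicas: the nodes the scheduler reports each already run a busybox pod (the earlier exec lines show busybox-7dff88458-8thz6 and busybox-7dff88458-kr92j responding), so no node is left for busybox-7dff88458-rng7p. A minimal way to inspect the constraint on the pending pod (a sketch, assuming the same context):

	kubectl --context ha-132600 get pod busybox-7dff88458-rng7p -o jsonpath='{.spec.affinity.podAntiAffinity}'
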
helpers_test.go:285: <<< TestMultiControlPlane/serial/HAppyAfterClusterStart FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (67.58s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (68.76s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-132600 status --output json -v=7 --alsologtostderr
E1014 07:45:37.376274     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\functional-572000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:328: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-132600 status --output json -v=7 --alsologtostderr: exit status 2 (35.4993142s)

                                                
                                                
-- stdout --
	[{"Name":"ha-132600","Host":"Running","Kubelet":"Running","APIServer":"Running","Kubeconfig":"Configured","Worker":false},{"Name":"ha-132600-m02","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false},{"Name":"ha-132600-m03","Host":"Running","Kubelet":"Running","APIServer":"Irrelevant","Kubeconfig":"Irrelevant","Worker":true}]

                                                
                                                
-- /stdout --
** stderr ** 
	I1014 07:45:03.720837   12516 out.go:345] Setting OutFile to fd 944 ...
	I1014 07:45:03.723276   12516 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:45:03.723349   12516 out.go:358] Setting ErrFile to fd 1460...
	I1014 07:45:03.723349   12516 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:45:03.739730   12516 out.go:352] Setting JSON to true
	I1014 07:45:03.739730   12516 mustload.go:65] Loading cluster: ha-132600
	I1014 07:45:03.739730   12516 notify.go:220] Checking for updates...
	I1014 07:45:03.740866   12516 config.go:182] Loaded profile config "ha-132600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:45:03.740866   12516 status.go:174] checking status of ha-132600 ...
	I1014 07:45:03.741654   12516 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:45:05.871079   12516 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:45:05.871079   12516 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:45:05.871218   12516 status.go:371] ha-132600 host status = "Running" (err=<nil>)
	I1014 07:45:05.871218   12516 host.go:66] Checking if "ha-132600" exists ...
	I1014 07:45:05.872024   12516 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:45:07.996598   12516 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:45:07.996680   12516 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:45:07.996783   12516 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:45:10.533961   12516 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:45:10.534101   12516 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:45:10.534101   12516 host.go:66] Checking if "ha-132600" exists ...
	I1014 07:45:10.545972   12516 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 07:45:10.547017   12516 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:45:12.674463   12516 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:45:12.674463   12516 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:45:12.675134   12516 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:45:15.278491   12516 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:45:15.278491   12516 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:45:15.279370   12516 sshutil.go:53] new ssh client: &{IP:172.20.108.120 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\id_rsa Username:docker}
	I1014 07:45:15.372119   12516 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.8250948s)
	I1014 07:45:15.384194   12516 ssh_runner.go:195] Run: systemctl --version
	I1014 07:45:15.407071   12516 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 07:45:15.439023   12516 kubeconfig.go:125] found "ha-132600" server: "https://172.20.111.254:8443"
	I1014 07:45:15.439093   12516 api_server.go:166] Checking apiserver status ...
	I1014 07:45:15.450354   12516 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 07:45:15.495842   12516 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2197/cgroup
	W1014 07:45:15.517595   12516 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2197/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1014 07:45:15.528979   12516 ssh_runner.go:195] Run: ls
	I1014 07:45:15.536463   12516 api_server.go:253] Checking apiserver healthz at https://172.20.111.254:8443/healthz ...
	I1014 07:45:15.544395   12516 api_server.go:279] https://172.20.111.254:8443/healthz returned 200:
	ok
	I1014 07:45:15.544395   12516 status.go:463] ha-132600 apiserver status = Running (err=<nil>)
	I1014 07:45:15.544469   12516 status.go:176] ha-132600 status: &{Name:ha-132600 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1014 07:45:15.544469   12516 status.go:174] checking status of ha-132600-m02 ...
	I1014 07:45:15.545093   12516 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:45:17.680250   12516 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:45:17.680250   12516 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:45:17.680250   12516 status.go:371] ha-132600-m02 host status = "Running" (err=<nil>)
	I1014 07:45:17.680861   12516 host.go:66] Checking if "ha-132600-m02" exists ...
	I1014 07:45:17.681536   12516 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:45:19.840467   12516 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:45:19.840467   12516 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:45:19.841478   12516 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:45:22.416667   12516 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:45:22.416725   12516 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:45:22.416725   12516 host.go:66] Checking if "ha-132600-m02" exists ...
	I1014 07:45:22.430208   12516 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 07:45:22.430208   12516 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:45:24.570456   12516 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:45:24.570731   12516 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:45:24.570731   12516 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:45:27.146525   12516 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:45:27.146683   12516 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:45:27.146683   12516 sshutil.go:53] new ssh client: &{IP:172.20.111.83 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\id_rsa Username:docker}
	I1014 07:45:27.254965   12516 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.8247496s)
	I1014 07:45:27.266710   12516 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 07:45:27.293989   12516 kubeconfig.go:125] found "ha-132600" server: "https://172.20.111.254:8443"
	I1014 07:45:27.294076   12516 api_server.go:166] Checking apiserver status ...
	I1014 07:45:27.305073   12516 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1014 07:45:27.332714   12516 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1014 07:45:27.332714   12516 status.go:463] ha-132600-m02 apiserver status = Stopped (err=<nil>)
	I1014 07:45:27.332714   12516 status.go:176] ha-132600-m02 status: &{Name:ha-132600-m02 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1014 07:45:27.332714   12516 status.go:174] checking status of ha-132600-m03 ...
	I1014 07:45:27.333647   12516 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m03 ).state
	I1014 07:45:29.446192   12516 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:45:29.446512   12516 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:45:29.446512   12516 status.go:371] ha-132600-m03 host status = "Running" (err=<nil>)
	I1014 07:45:29.446608   12516 host.go:66] Checking if "ha-132600-m03" exists ...
	I1014 07:45:29.447418   12516 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m03 ).state
	I1014 07:45:31.612800   12516 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:45:31.612952   12516 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:45:31.613162   12516 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m03 ).networkadapters[0]).ipaddresses[0]
	I1014 07:45:34.169597   12516 main.go:141] libmachine: [stdout =====>] : 172.20.111.174
	
	I1014 07:45:34.169597   12516 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:45:34.170343   12516 host.go:66] Checking if "ha-132600-m03" exists ...
	I1014 07:45:34.182813   12516 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 07:45:34.182813   12516 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m03 ).state
	I1014 07:45:36.365837   12516 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:45:36.366829   12516 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:45:36.366829   12516 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m03 ).networkadapters[0]).ipaddresses[0]
	I1014 07:45:38.938512   12516 main.go:141] libmachine: [stdout =====>] : 172.20.111.174
	
	I1014 07:45:38.938512   12516 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:45:38.939562   12516 sshutil.go:53] new ssh client: &{IP:172.20.111.174 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m03\id_rsa Username:docker}
	I1014 07:45:39.029987   12516 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.8471674s)
	I1014 07:45:39.041474   12516 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 07:45:39.067241   12516 status.go:176] ha-132600-m03 status: &{Name:ha-132600-m03 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:330: failed to run minikube status. args "out/minikube-windows-amd64.exe -p ha-132600 status --output json -v=7 --alsologtostderr" : exit status 2
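The exit status 2 traces to ha-132600-m02: the status JSON reports Kubelet and APIServer as Stopped, and the stderr trace shows pgrep finding no kube-apiserver process on that VM even though the Hyper-V guest itself is Running. A minimal manual check of the node's kubelet (a sketch, assuming the same profile and node names; minikube ssh takes -n to target a node):

	out/minikube-windows-amd64.exe -p ha-132600 ssh -n ha-132600-m02 -- sudo systemctl status kubelet
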
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-132600 -n ha-132600
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-132600 -n ha-132600: (12.1416771s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/CopyFile FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/CopyFile]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-132600 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-132600 logs -n 25: (8.2649203s)
helpers_test.go:252: TestMultiControlPlane/serial/CopyFile logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| kubectl | -p ha-132600 -- get pods -o          | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:36 PDT | 14 Oct 24 07:36 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- get pods -o          | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:36 PDT | 14 Oct 24 07:36 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- get pods -o          | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:36 PDT | 14 Oct 24 07:36 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- get pods -o          | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:36 PDT | 14 Oct 24 07:36 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- get pods -o          | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:36 PDT | 14 Oct 24 07:36 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- get pods -o          | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:36 PDT | 14 Oct 24 07:36 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- get pods -o          | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:37 PDT | 14 Oct 24 07:37 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- get pods -o          | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:37 PDT | 14 Oct 24 07:37 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- get pods -o          | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT | 14 Oct 24 07:38 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- get pods -o          | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT | 14 Oct 24 07:38 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT |                     |
	|         | busybox-7dff88458-8thz6 --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT | 14 Oct 24 07:38 PDT |
	|         | busybox-7dff88458-kr92j --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT |                     |
	|         | busybox-7dff88458-rng7p --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT |                     |
	|         | busybox-7dff88458-8thz6 --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT | 14 Oct 24 07:38 PDT |
	|         | busybox-7dff88458-kr92j --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT |                     |
	|         | busybox-7dff88458-rng7p --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT |                     |
	|         | busybox-7dff88458-8thz6 -- nslookup  |           |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT | 14 Oct 24 07:38 PDT |
	|         | busybox-7dff88458-kr92j -- nslookup  |           |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT |                     |
	|         | busybox-7dff88458-rng7p -- nslookup  |           |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- get pods -o          | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT | 14 Oct 24 07:38 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT |                     |
	|         | busybox-7dff88458-8thz6              |           |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT | 14 Oct 24 07:38 PDT |
	|         | busybox-7dff88458-kr92j              |           |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT |                     |
	|         | busybox-7dff88458-kr92j -- sh        |           |                   |         |                     |                     |
	|         | -c ping -c 1 172.20.96.1             |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT |                     |
	|         | busybox-7dff88458-rng7p              |           |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |         |                     |                     |
	| node    | add -p ha-132600 -v=7                | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:39 PDT | 14 Oct 24 07:42 PDT |
	|         | --alsologtostderr                    |           |                   |         |                     |                     |
	|---------|--------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
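
The exec probes in the table above repeatedly run nslookup inside the three busybox pods to verify in-cluster and host DNS resolution (kubernetes.io, kubernetes.default, the full service name, and host.minikube.internal). A minimal standalone sketch of the same loop, assuming kubectl on PATH and reusing the pod names from this run (they exist only in this test), could look like:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Pod names and lookup targets copied from the audit table above;
		// they only exist in that particular test run.
		pods := []string{"busybox-7dff88458-8thz6", "busybox-7dff88458-kr92j", "busybox-7dff88458-rng7p"}
		targets := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
		for _, pod := range pods {
			for _, target := range targets {
				// Rough equivalent of: minikube kubectl -p ha-132600 -- exec <pod> -- nslookup <target>
				out, err := exec.Command("kubectl", "exec", pod, "--", "nslookup", target).CombinedOutput()
				if err != nil {
					fmt.Printf("%s -> %s failed: %v\n%s\n", pod, target, err, out)
					continue
				}
				fmt.Printf("%s -> %s ok\n", pod, target)
			}
		}
	}

The kubectl rows in the table are minikube's passthrough form (minikube kubectl -p ha-132600 -- ...), which forwards the arguments to kubectl against that profile's kubeconfig.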
	
	
	==> Last Start <==
	Log file created at: 2024/10/14 07:19:00
	Running on machine: minikube1
	Binary: Built with gc go1.23.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 07:19:00.342865   13076 out.go:345] Setting OutFile to fd 1228 ...
	I1014 07:19:00.344546   13076 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:19:00.344546   13076 out.go:358] Setting ErrFile to fd 824...
	I1014 07:19:00.344546   13076 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:19:00.368247   13076 out.go:352] Setting JSON to false
	I1014 07:19:00.372277   13076 start.go:129] hostinfo: {"hostname":"minikube1","uptime":101054,"bootTime":1728814485,"procs":202,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5011 Build 19045.5011","kernelVersion":"10.0.19045.5011 Build 19045.5011","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W1014 07:19:00.372803   13076 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1014 07:19:00.379728   13076 out.go:177] * [ha-132600] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5011 Build 19045.5011
	I1014 07:19:00.386820   13076 notify.go:220] Checking for updates...
	I1014 07:19:00.389571   13076 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1014 07:19:00.392679   13076 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 07:19:00.395167   13076 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I1014 07:19:00.398285   13076 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 07:19:00.400520   13076 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 07:19:00.404118   13076 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 07:19:05.654471   13076 out.go:177] * Using the hyperv driver based on user configuration
	I1014 07:19:05.658285   13076 start.go:297] selected driver: hyperv
	I1014 07:19:05.658285   13076 start.go:901] validating driver "hyperv" against <nil>
	I1014 07:19:05.658285   13076 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 07:19:05.705211   13076 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1014 07:19:05.706541   13076 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 07:19:05.706541   13076 cni.go:84] Creating CNI manager for ""
	I1014 07:19:05.706541   13076 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1014 07:19:05.706541   13076 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1014 07:19:05.707066   13076 start.go:340] cluster config:
	{Name:ha-132600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-132600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 07:19:05.707207   13076 iso.go:125] acquiring lock: {Name:mkb71635bcbccba29dce9048ad4cc71430a7e577 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 07:19:05.711264   13076 out.go:177] * Starting "ha-132600" primary control-plane node in "ha-132600" cluster
	I1014 07:19:05.715097   13076 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1014 07:19:05.715262   13076 preload.go:146] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I1014 07:19:05.715344   13076 cache.go:56] Caching tarball of preloaded images
	I1014 07:19:05.715452   13076 preload.go:172] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1014 07:19:05.715452   13076 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1014 07:19:05.716553   13076 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\config.json ...
	I1014 07:19:05.716898   13076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\config.json: {Name:mkbd11bf3f90adebf1f4630d2e8deaac328748d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:19:05.717886   13076 start.go:360] acquireMachinesLock for ha-132600: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 07:19:05.717886   13076 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-132600"
	I1014 07:19:05.718562   13076 start.go:93] Provisioning new machine with config: &{Name:ha-132600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-132600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1014 07:19:05.718562   13076 start.go:125] createHost starting for "" (driver="hyperv")
	I1014 07:19:05.721891   13076 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1014 07:19:05.722555   13076 start.go:159] libmachine.API.Create for "ha-132600" (driver="hyperv")
	I1014 07:19:05.722555   13076 client.go:168] LocalClient.Create starting
	I1014 07:19:05.722555   13076 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I1014 07:19:05.723316   13076 main.go:141] libmachine: Decoding PEM data...
	I1014 07:19:05.723316   13076 main.go:141] libmachine: Parsing certificate...
	I1014 07:19:05.723316   13076 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I1014 07:19:05.723877   13076 main.go:141] libmachine: Decoding PEM data...
	I1014 07:19:05.723955   13076 main.go:141] libmachine: Parsing certificate...
	I1014 07:19:05.723955   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I1014 07:19:07.752927   13076 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I1014 07:19:07.753576   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:07.753793   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I1014 07:19:09.445838   13076 main.go:141] libmachine: [stdout =====>] : False
	
	I1014 07:19:09.445838   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:09.446315   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1014 07:19:10.901034   13076 main.go:141] libmachine: [stdout =====>] : True
	
	I1014 07:19:10.901034   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:10.901034   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1014 07:19:14.320078   13076 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1014 07:19:14.320309   13076 main.go:141] libmachine: [stderr =====>] : 
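
Each [executing ==>] line is a separate powershell.exe invocation whose stdout and stderr are captured separately; the switch discovery above returns JSON that the driver filters for an external switch or the well-known Default Switch GUID. A simplified Go sketch of that execute-and-parse pattern (not minikube's exact code):

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	// vmSwitch mirrors the fields selected by the Get-VMSwitch pipeline above.
	type vmSwitch struct {
		Id         string
		Name       string
		SwitchType int
	}

	func main() {
		script := `[Console]::OutputEncoding = [Text.Encoding]::UTF8; ` +
			`ConvertTo-Json @(Hyper-V\Get-VMSwitch | Select Id, Name, SwitchType)`
		out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", script).Output()
		if err != nil {
			log.Fatalf("powershell: %v", err)
		}
		var switches []vmSwitch
		if err := json.Unmarshal(out, &switches); err != nil {
			log.Fatalf("parse: %v", err)
		}
		for _, s := range switches {
			fmt.Printf("switch %q id=%s type=%d\n", s.Name, s.Id, s.SwitchType)
		}
	}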
	I1014 07:19:14.323253   13076 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1014 07:19:14.829443   13076 main.go:141] libmachine: Creating SSH key...
	I1014 07:19:14.988345   13076 main.go:141] libmachine: Creating VM...
	I1014 07:19:14.988345   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1014 07:19:17.790932   13076 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1014 07:19:17.791005   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:17.791145   13076 main.go:141] libmachine: Using switch "Default Switch"
	I1014 07:19:17.791145   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1014 07:19:19.529861   13076 main.go:141] libmachine: [stdout =====>] : True
	
	I1014 07:19:19.529861   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:19.529861   13076 main.go:141] libmachine: Creating VHD
	I1014 07:19:19.530436   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\fixed.vhd' -SizeBytes 10MB -Fixed
	I1014 07:19:23.105904   13076 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 267BC11D-A195-4636-8AE9-EF1D7966C95C
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I1014 07:19:23.105987   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:23.106037   13076 main.go:141] libmachine: Writing magic tar header
	I1014 07:19:23.106037   13076 main.go:141] libmachine: Writing SSH key tar header
	I1014 07:19:23.118040   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\disk.vhd' -VHDType Dynamic -DeleteSource
	I1014 07:19:26.201609   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:26.201950   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:26.202023   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\disk.vhd' -SizeBytes 20000MB
	I1014 07:19:28.768368   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:28.768789   13076 main.go:141] libmachine: [stderr =====>] : 
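
The VHD sequence above follows the docker-machine hyperv recipe this driver inherits: create a tiny fixed 10MB VHD, write a tar stream containing the generated SSH key straight into its data area (the "Writing magic tar header" / "Writing SSH key tar header" lines), then convert it to a dynamic VHD and resize it to the requested 20000MB so the guest can pick the key up on first boot. A rough, hypothetical Go sketch of the tar-into-raw-disk step (paths and archive layout are illustrative, not minikube's exact implementation):

	package main

	import (
		"archive/tar"
		"log"
		"os"
	)

	// writeKeyTar writes a tar archive containing the public key at the start
	// of the fixed VHD's data area, before the file is converted to dynamic
	// format. A fixed VHD is raw disk data with a trailing footer, so writing
	// at offset 0 lands inside the disk image itself.
	func writeKeyTar(vhdPath string, pubKey []byte) error {
		f, err := os.OpenFile(vhdPath, os.O_WRONLY, 0)
		if err != nil {
			return err
		}
		defer f.Close()
		tw := tar.NewWriter(f)
		hdr := &tar.Header{Name: ".ssh/authorized_keys", Mode: 0644, Size: int64(len(pubKey))}
		if err := tw.WriteHeader(hdr); err != nil {
			return err
		}
		if _, err := tw.Write(pubKey); err != nil {
			return err
		}
		return tw.Close() // flushes the archive's trailing zero blocks
	}

	func main() {
		key, err := os.ReadFile("id_rsa.pub") // hypothetical key path
		if err != nil {
			log.Fatal(err)
		}
		if err := writeKeyTar("fixed.vhd", key); err != nil {
			log.Fatal(err)
		}
	}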
	I1014 07:19:28.768969   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-132600 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I1014 07:19:32.226690   13076 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-132600 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I1014 07:19:32.226915   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:32.226915   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-132600 -DynamicMemoryEnabled $false
	I1014 07:19:34.390731   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:34.390837   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:34.391049   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-132600 -Count 2
	I1014 07:19:36.449382   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:36.449628   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:36.449628   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-132600 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\boot2docker.iso'
	I1014 07:19:38.932909   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:38.932909   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:38.932909   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-132600 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\disk.vhd'
	I1014 07:19:41.509229   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:41.509229   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:41.509229   13076 main.go:141] libmachine: Starting VM...
	I1014 07:19:41.509229   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-132600
	I1014 07:19:44.718680   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:44.718680   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:44.719475   13076 main.go:141] libmachine: Waiting for host to start...
	I1014 07:19:44.719770   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:19:46.954823   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:19:46.954823   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:46.955829   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:19:49.408227   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:49.408227   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:50.409369   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:19:52.561607   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:19:52.561912   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:52.561912   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:19:55.081136   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:55.081187   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:56.082401   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:19:58.271497   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:19:58.271497   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:58.272384   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:00.740863   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:20:00.741070   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:01.741536   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:03.902654   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:03.902721   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:03.902721   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:06.378064   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:20:06.378064   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:07.379104   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:09.537329   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:09.537329   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:09.537541   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:12.074320   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:12.074320   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:12.074834   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:14.171613   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:14.171613   13076 main.go:141] libmachine: [stderr =====>] : 
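
"Waiting for host to start..." is a poll loop: check the VM state, then ask for the first NIC's first IP address; stdout stays empty until DHCP assigns 172.20.108.120 at 07:20:12. The shape of that loop as a sketch (one-second waits here; the real probes are paced mostly by the roughly two-second PowerShell round-trips visible in the timestamps):

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
		"time"
	)

	// ps runs a PowerShell snippet and returns its trimmed stdout.
	func ps(script string) (string, error) {
		out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", script).Output()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		const vm = "ha-132600"
		for i := 0; i < 60; i++ {
			state, err := ps("( Hyper-V\\Get-VM " + vm + " ).state")
			if err != nil || state != "Running" {
				log.Fatalf("vm not running: %q err=%v", state, err)
			}
			ip, _ := ps("(( Hyper-V\\Get-VM " + vm + " ).networkadapters[0]).ipaddresses[0]")
			if ip != "" {
				fmt.Println("got IP:", ip)
				return
			}
			time.Sleep(time.Second)
		}
		log.Fatal("timed out waiting for an IP address")
	}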
	I1014 07:20:14.171613   13076 machine.go:93] provisionDockerMachine start ...
	I1014 07:20:14.172377   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:16.225265   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:16.225265   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:16.225634   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:18.692061   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:18.692061   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:18.698261   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:20:18.713495   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.108.120 22 <nil> <nil>}
	I1014 07:20:18.713568   13076 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 07:20:18.839311   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1014 07:20:18.839517   13076 buildroot.go:166] provisioning hostname "ha-132600"
	I1014 07:20:18.839517   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:20.911254   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:20.911424   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:20.911601   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:23.370138   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:23.370756   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:23.377008   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:20:23.377773   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.108.120 22 <nil> <nil>}
	I1014 07:20:23.377773   13076 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-132600 && echo "ha-132600" | sudo tee /etc/hostname
	I1014 07:20:23.537548   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-132600
	
	I1014 07:20:23.537548   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:25.571429   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:25.571429   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:25.572326   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:28.068677   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:28.068677   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:28.077026   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:20:28.078615   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.108.120 22 <nil> <nil>}
	I1014 07:20:28.078615   13076 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-132600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-132600/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-132600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 07:20:28.216809   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: 
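
provisionDockerMachine then switches to SSH: each "About to run SSH command" block is a one-shot session as user docker with the generated machine key, like the hostname write above (the grep/sed block keeps /etc/hosts idempotent). A minimal one-shot session using golang.org/x/crypto/ssh, with the address and key path taken from this run and host-key checking skipped for brevity:

	package main

	import (
		"fmt"
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile(`C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\id_rsa`)
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			log.Fatal(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // no host-key pinning in this sketch
		}
		client, err := ssh.Dial("tcp", "172.20.108.120:22", cfg)
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			log.Fatal(err)
		}
		defer sess.Close()
		// Same command the provisioner runs above.
		out, err := sess.CombinedOutput(`sudo hostname ha-132600 && echo "ha-132600" | sudo tee /etc/hostname`)
		if err != nil {
			log.Fatalf("ssh: %v\n%s", err, out)
		}
		fmt.Printf("%s", out)
	}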
	I1014 07:20:28.216874   13076 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I1014 07:20:28.216990   13076 buildroot.go:174] setting up certificates
	I1014 07:20:28.217016   13076 provision.go:84] configureAuth start
	I1014 07:20:28.217047   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:30.246758   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:30.247093   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:30.247368   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:32.688428   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:32.688428   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:32.688428   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:34.748908   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:34.748908   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:34.749349   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:37.221628   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:37.221798   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:37.221798   13076 provision.go:143] copyHostCerts
	I1014 07:20:37.221998   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I1014 07:20:37.222444   13076 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I1014 07:20:37.222558   13076 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I1014 07:20:37.223110   13076 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1014 07:20:37.224109   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I1014 07:20:37.224599   13076 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I1014 07:20:37.224599   13076 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I1014 07:20:37.224959   13076 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I1014 07:20:37.225332   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I1014 07:20:37.225332   13076 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I1014 07:20:37.225332   13076 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I1014 07:20:37.226765   13076 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1014 07:20:37.227733   13076 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-132600 san=[127.0.0.1 172.20.108.120 ha-132600 localhost minikube]
	I1014 07:20:37.731674   13076 provision.go:177] copyRemoteCerts
	I1014 07:20:37.744706   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 07:20:37.744706   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:39.801309   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:39.801309   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:39.801309   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:42.273306   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:42.273367   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:42.273367   13076 sshutil.go:53] new ssh client: &{IP:172.20.108.120 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\id_rsa Username:docker}
	I1014 07:20:42.378794   13076 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.6340227s)
	I1014 07:20:42.378847   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1014 07:20:42.378847   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1200 bytes)
	I1014 07:20:42.424772   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1014 07:20:42.425289   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 07:20:42.471397   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1014 07:20:42.471964   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1014 07:20:42.527104   13076 provision.go:87] duration metric: took 14.3100395s to configureAuth
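
configureAuth issues a server certificate whose SANs cover exactly the names logged above (127.0.0.1, 172.20.108.120, ha-132600, localhost, minikube) before copying it to /etc/docker. A compact sketch with crypto/x509; self-signed here for brevity, whereas the real flow signs with the minikube CA key:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			log.Fatal(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-132600"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs as logged: 127.0.0.1 172.20.108.120 ha-132600 localhost minikube
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.20.108.120")},
			DNSNames:    []string{"ha-132600", "localhost", "minikube"},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			log.Fatal(err)
		}
		out, err := os.Create("server.pem")
		if err != nil {
			log.Fatal(err)
		}
		defer out.Close()
		pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}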
	I1014 07:20:42.527104   13076 buildroot.go:189] setting minikube options for container-runtime
	I1014 07:20:42.527712   13076 config.go:182] Loaded profile config "ha-132600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:20:42.528252   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:44.598308   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:44.598308   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:44.599305   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:47.040424   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:47.040637   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:47.045726   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:20:47.046398   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.108.120 22 <nil> <nil>}
	I1014 07:20:47.046398   13076 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1014 07:20:47.167710   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1014 07:20:47.167801   13076 buildroot.go:70] root file system type: tmpfs
	I1014 07:20:47.168208   13076 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1014 07:20:47.168315   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:49.202111   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:49.202331   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:49.202424   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:51.648278   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:51.648278   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:51.654248   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:20:51.655052   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.108.120 22 <nil> <nil>}
	I1014 07:20:51.655052   13076 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1014 07:20:51.814048   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1014 07:20:51.814229   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:53.871643   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:53.871643   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:53.872030   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:56.286054   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:56.286054   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:56.292357   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:20:56.293125   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.108.120 22 <nil> <nil>}
	I1014 07:20:56.293125   13076 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1014 07:20:58.458028   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1014 07:20:58.458028   13076 machine.go:96] duration metric: took 44.286361s to provisionDockerMachine
	I1014 07:20:58.458028   13076 client.go:171] duration metric: took 1m52.7353371s to LocalClient.Create
	I1014 07:20:58.458028   13076 start.go:167] duration metric: took 1m52.7353371s to libmachine.API.Create "ha-132600"
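
The docker unit above lands via an idempotent compare-then-replace one-liner: diff succeeds and nothing happens, or the .new file is moved into place and the daemon is reloaded, enabled, and restarted. Here diff fails because no unit existed yet, hence the "Created symlink" line. The same pattern as a sketch (shelling out, mirroring the logged commands rather than porting minikube's code):

	package main

	import (
		"fmt"
		"log"
		"os"
		"os/exec"
	)

	// installUnit replaces dst with src only when the contents differ (or dst
	// is missing), then reloads systemd and enables and restarts the service.
	func installUnit(src, dst, service string) error {
		if err := exec.Command("diff", "-u", dst, src).Run(); err == nil {
			return nil // files identical: nothing to do
		}
		if err := os.Rename(src, dst); err != nil {
			return err
		}
		for _, args := range [][]string{
			{"systemctl", "-f", "daemon-reload"},
			{"systemctl", "-f", "enable", service},
			{"systemctl", "-f", "restart", service},
		} {
			if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
				return fmt.Errorf("%v: %v\n%s", args, err, out)
			}
		}
		return nil
	}

	func main() {
		if err := installUnit("/lib/systemd/system/docker.service.new",
			"/lib/systemd/system/docker.service", "docker"); err != nil {
			log.Fatal(err)
		}
	}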
	I1014 07:20:58.458028   13076 start.go:293] postStartSetup for "ha-132600" (driver="hyperv")
	I1014 07:20:58.458028   13076 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 07:20:58.471262   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 07:20:58.471262   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:21:00.512590   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:21:00.513162   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:00.513321   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:21:02.924795   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:21:02.924929   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:02.925163   13076 sshutil.go:53] new ssh client: &{IP:172.20.108.120 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\id_rsa Username:docker}
	I1014 07:21:03.032582   13076 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.5613141s)
	I1014 07:21:03.044373   13076 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 07:21:03.050831   13076 info.go:137] Remote host: Buildroot 2023.02.9
	I1014 07:21:03.050831   13076 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I1014 07:21:03.051271   13076 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I1014 07:21:03.052198   13076 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem -> 9362.pem in /etc/ssl/certs
	I1014 07:21:03.052295   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem -> /etc/ssl/certs/9362.pem
	I1014 07:21:03.063873   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 07:21:03.081968   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem --> /etc/ssl/certs/9362.pem (1708 bytes)
	I1014 07:21:03.128637   13076 start.go:296] duration metric: took 4.6706031s for postStartSetup
	I1014 07:21:03.132031   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:21:05.196102   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:21:05.196622   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:05.196696   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:21:07.624940   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:21:07.625781   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:07.626103   13076 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\config.json ...
	I1014 07:21:07.629046   13076 start.go:128] duration metric: took 2m1.9103356s to createHost
	I1014 07:21:07.629046   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:21:09.687162   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:21:09.687420   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:09.687490   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:21:12.104556   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:21:12.104556   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:12.110165   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:21:12.110572   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.108.120 22 <nil> <nil>}
	I1014 07:21:12.110572   13076 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1014 07:21:12.241524   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728915672.238854784
	
	I1014 07:21:12.241655   13076 fix.go:216] guest clock: 1728915672.238854784
	I1014 07:21:12.241726   13076 fix.go:229] Guest: 2024-10-14 07:21:12.238854784 -0700 PDT Remote: 2024-10-14 07:21:07.6290467 -0700 PDT m=+127.381500801 (delta=4.609808084s)
	I1014 07:21:12.241841   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:21:14.299338   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:21:14.299599   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:14.299599   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:21:16.767477   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:21:16.767665   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:16.775565   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:21:16.775565   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.108.120 22 <nil> <nil>}
	I1014 07:21:16.775565   13076 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1728915672
	I1014 07:21:16.921038   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Oct 14 14:21:12 UTC 2024
	
	I1014 07:21:16.921096   13076 fix.go:236] clock set: Mon Oct 14 14:21:12 UTC 2024
	 (err=<nil>)
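
The guest clock check reads date +%s.%N over SSH, computes the skew against the host clock (4.6s here, accrued during the two-minute createHost), and writes the host time back with sudo date -s @<seconds>. Parsing and delta computation in sketch form, using the value captured above:

	package main

	import (
		"fmt"
		"log"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock converts `date +%s.%N` output into a time.Time.
	func parseGuestClock(out string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			nsec, _ = strconv.ParseInt(parts[1], 10, 64)
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseGuestClock("1728915672.238854784") // value captured in the log above
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("guest=%s skew=%s\n", guest, time.Since(guest))
		// When the skew matters, the driver pushes host time into the guest:
		fmt.Printf("fix: sudo date -s @%d\n", time.Now().Unix())
	}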
	I1014 07:21:16.921169   13076 start.go:83] releasing machines lock for "ha-132600", held for 2m11.2030498s
	I1014 07:21:16.921319   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:21:18.988904   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:21:18.989898   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:18.989947   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:21:21.410167   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:21:21.410403   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:21.414482   13076 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1014 07:21:21.414683   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:21:21.423844   13076 ssh_runner.go:195] Run: cat /version.json
	I1014 07:21:21.423844   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:21:23.554001   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:21:23.554206   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:23.554362   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:21:23.565882   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:21:23.565882   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:23.565882   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:21:26.092249   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:21:26.092249   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:26.092379   13076 sshutil.go:53] new ssh client: &{IP:172.20.108.120 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\id_rsa Username:docker}
	I1014 07:21:26.119019   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:21:26.119086   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:26.119215   13076 sshutil.go:53] new ssh client: &{IP:172.20.108.120 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\id_rsa Username:docker}
	I1014 07:21:26.179573   13076 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.764971s)
	W1014 07:21:26.179670   13076 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
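
Worth noting here: the registry probe is executed through ssh_runner inside the Linux guest, but with the Windows binary name curl.exe, so bash exits 127 with "command not found" and minikube emits the two proxy warnings a few lines below as if the registry were unreachable. A hedged sketch of picking the probe binary by where the command will run (illustrative only, not the upstream fix):

	package main

	import "fmt"

	// curlBinary returns the curl name appropriate for where the command runs:
	// the guest VM is always Linux here, even when the host is Windows.
	func curlBinary(runsInGuest bool, hostOS string) string {
		if runsInGuest || hostOS != "windows" {
			return "curl"
		}
		return "curl.exe"
	}

	func main() {
		// The logged probe runs inside the guest, so plain "curl" is wanted.
		fmt.Println(curlBinary(true, "windows"), "-sS", "-m", "2", "https://registry.k8s.io/")
	}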
	I1014 07:21:26.212879   13076 ssh_runner.go:235] Completed: cat /version.json: (4.7890281s)
	I1014 07:21:26.223815   13076 ssh_runner.go:195] Run: systemctl --version
	I1014 07:21:26.243251   13076 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 07:21:26.251321   13076 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 07:21:26.262431   13076 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 07:21:26.290027   13076 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1014 07:21:26.290027   13076 start.go:495] detecting cgroup driver to use...
	I1014 07:21:26.290027   13076 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W1014 07:21:26.296743   13076 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W1014 07:21:26.296743   13076 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1014 07:21:26.340326   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1014 07:21:26.370362   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1014 07:21:26.391878   13076 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1014 07:21:26.402847   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1014 07:21:26.433241   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1014 07:21:26.462619   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1014 07:21:26.491735   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1014 07:21:26.520404   13076 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 07:21:26.550615   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1014 07:21:26.580278   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1014 07:21:26.608564   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
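Each sed above rewrites one key of /etc/containerd/config.toml in place; for example, forcing SystemdCgroup = false keeps containerd on the cgroupfs driver, matching the cgroupDriver: cgroupfs kubelet setting generated further down. An illustrative Go equivalent of that one edit (a sketch, not the code minikube runs):

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        // Same substitution as the sed above, preserving indentation.
        re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
        conf := "    SystemdCgroup = true"
        fmt.Println(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
    }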
	I1014 07:21:26.636849   13076 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 07:21:26.656448   13076 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1014 07:21:26.667504   13076 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1014 07:21:26.700184   13076 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
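The status-255 sysctl above is expected on a fresh guest: the net.bridge.* keys only exist once the br_netfilter module is loaded, so the runner falls back to modprobe and then enables IPv4 forwarding, both prerequisites for bridged pod traffic. A condensed sketch of that fallback (illustrative; assumes root on a Linux host):

    package main

    import "os/exec"

    func main() {
        // The sysctl key is absent until br_netfilter is loaded.
        if exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run() != nil {
            exec.Command("modprobe", "br_netfilter").Run()
        }
        // Bridged pod traffic also needs IPv4 forwarding.
        exec.Command("sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
    }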
	I1014 07:21:26.726834   13076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:21:26.916163   13076 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1014 07:21:26.953200   13076 start.go:495] detecting cgroup driver to use...
	I1014 07:21:26.967627   13076 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1014 07:21:27.005080   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 07:21:27.039193   13076 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 07:21:27.083701   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 07:21:27.118670   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1014 07:21:27.153508   13076 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1014 07:21:27.209689   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1014 07:21:27.230765   13076 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 07:21:27.273213   13076 ssh_runner.go:195] Run: which cri-dockerd
	I1014 07:21:27.291783   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1014 07:21:27.308581   13076 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1014 07:21:27.351709   13076 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1014 07:21:27.539571   13076 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1014 07:21:27.725567   13076 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1014 07:21:27.725567   13076 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1014 07:21:27.768723   13076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:21:27.951142   13076 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1014 07:21:30.486027   13076 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5347969s)
	I1014 07:21:30.497722   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1014 07:21:30.531680   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1014 07:21:30.563742   13076 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1014 07:21:30.767218   13076 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1014 07:21:30.967093   13076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:21:31.191192   13076 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1014 07:21:31.229757   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1014 07:21:31.267426   13076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:21:31.462408   13076 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1014 07:21:31.569509   13076 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1014 07:21:31.581151   13076 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1014 07:21:31.591322   13076 start.go:563] Will wait 60s for crictl version
	I1014 07:21:31.604845   13076 ssh_runner.go:195] Run: which crictl
	I1014 07:21:31.622874   13076 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1014 07:21:31.680621   13076 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I1014 07:21:31.689423   13076 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1014 07:21:31.732594   13076 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1014 07:21:31.766375   13076 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	I1014 07:21:31.766375   13076 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I1014 07:21:31.770835   13076 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I1014 07:21:31.770835   13076 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I1014 07:21:31.770835   13076 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I1014 07:21:31.770835   13076 ip.go:211] Found interface: {Index:13 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:08:65:4d Flags:up|broadcast|multicast|running}
	I1014 07:21:31.773955   13076 ip.go:214] interface addr: fe80::b548:ff79:d464:75b8/64
	I1014 07:21:31.773955   13076 ip.go:214] interface addr: 172.20.96.1/20
	I1014 07:21:31.784171   13076 ssh_runner.go:195] Run: grep 172.20.96.1	host.minikube.internal$ /etc/hosts
	I1014 07:21:31.791192   13076 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.20.96.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
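The one-liner above is the usual idiom for editing /etc/hosts without sed -i: filter out any stale host.minikube.internal record, append the current gateway address, and install the result with a single cp so no reader ever sees a half-written file. A hedged Go rendering of the same transformation:

    package main

    import (
        "fmt"
        "strings"
    )

    func rewriteHosts(hosts, ip string) string {
        var keep []string
        for _, line := range strings.Split(hosts, "\n") {
            if !strings.HasSuffix(line, "\thost.minikube.internal") {
                keep = append(keep, line)
            }
        }
        return strings.Join(append(keep, ip+"\thost.minikube.internal"), "\n")
    }

    func main() {
        fmt.Println(rewriteHosts("127.0.0.1\tlocalhost", "172.20.96.1"))
    }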
	I1014 07:21:31.825649   13076 kubeadm.go:883] updating cluster {Name:ha-132600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-132600 Namespace:default APIServerHAVIP:172.20.111.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.108.120 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 07:21:31.825649   13076 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1014 07:21:31.834667   13076 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1014 07:21:31.867707   13076 docker.go:689] Got preloaded images: 
	I1014 07:21:31.867707   13076 docker.go:695] registry.k8s.io/kube-apiserver:v1.31.1 wasn't preloaded
	I1014 07:21:31.880238   13076 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1014 07:21:31.908893   13076 ssh_runner.go:195] Run: which lz4
	I1014 07:21:31.914374   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1014 07:21:31.924715   13076 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1014 07:21:31.930846   13076 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1014 07:21:31.931075   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (342028912 bytes)
	I1014 07:21:33.829320   13076 docker.go:653] duration metric: took 1.9148657s to copy over tarball
	I1014 07:21:33.840382   13076 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1014 07:21:42.539043   13076 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.6986506s)
	I1014 07:21:42.539174   13076 ssh_runner.go:146] rm: /preloaded.tar.lz4
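For scale, the timings above say the 342,028,912-byte preload tarball crossed the Hyper-V switch in about 1.91 s, roughly 170 MiB/s, while the lz4 extraction dominated at 8.7 s. A quick check of that arithmetic:

    package main

    import "fmt"

    func main() {
        const bytes, secs = 342028912.0, 1.9148657
        fmt.Printf("%.0f MiB/s\n", bytes/secs/(1<<20)) // ≈ 170 MiB/s
    }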
	I1014 07:21:42.611065   13076 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1014 07:21:42.636639   13076 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I1014 07:21:42.681041   13076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:21:42.879066   13076 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1014 07:21:46.161713   13076 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.2824935s)
	I1014 07:21:46.173131   13076 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1014 07:21:46.198645   13076 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1014 07:21:46.198645   13076 cache_images.go:84] Images are preloaded, skipping loading
	I1014 07:21:46.198645   13076 kubeadm.go:934] updating node { 172.20.108.120 8443 v1.31.1 docker true true} ...
	I1014 07:21:46.198645   13076 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-132600 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.20.108.120
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-132600 Namespace:default APIServerHAVIP:172.20.111.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 07:21:46.209892   13076 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1014 07:21:46.283733   13076 cni.go:84] Creating CNI manager for ""
	I1014 07:21:46.283861   13076 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1014 07:21:46.283932   13076 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1014 07:21:46.284000   13076 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.20.108.120 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-132600 NodeName:ha-132600 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.20.108.120"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.20.108.120 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 07:21:46.284290   13076 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.20.108.120
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-132600"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "172.20.108.120"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.20.108.120"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
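A quick sanity check on the networking values in the generated config: the pod CIDR (podSubnet/clusterCIDR, 10.244.0.0/16) and the service CIDR (serviceSubnet, 10.96.0.0/12) must be disjoint, or Pods and ClusterIP Services would collide. An illustrative check:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        pod := netip.MustParsePrefix("10.244.0.0/16")
        svc := netip.MustParsePrefix("10.96.0.0/12")
        fmt.Println(pod.Overlaps(svc)) // false — the ranges are disjoint
    }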
	
	I1014 07:21:46.284367   13076 kube-vip.go:115] generating kube-vip config ...
	I1014 07:21:46.295693   13076 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1014 07:21:46.324017   13076 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1014 07:21:46.324230   13076 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.20.111.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
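The generated kube-vip manifest runs on the host network of each control-plane node and uses Kubernetes leader election (lease plndr-cp-lock) to decide which node answers for the VIP 172.20.111.254. The timing env vars have to respect the leader-election ordering leaseDuration > renewDeadline > retryPeriod, which the values above do:

    package main

    import "fmt"

    func main() {
        // From the pod env above: vip_leaseduration=5, vip_renewdeadline=3,
        // vip_retryperiod=1 (seconds).
        lease, renew, retry := 5, 3, 1
        fmt.Println(lease > renew && renew > retry) // true
    }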
	I1014 07:21:46.334184   13076 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1014 07:21:46.349321   13076 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 07:21:46.360771   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1014 07:21:46.379224   13076 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (310 bytes)
	I1014 07:21:46.412080   13076 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 07:21:46.441691   13076 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I1014 07:21:46.479025   13076 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1014 07:21:46.519461   13076 ssh_runner.go:195] Run: grep 172.20.111.254	control-plane.minikube.internal$ /etc/hosts
	I1014 07:21:46.525021   13076 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.20.111.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 07:21:46.554943   13076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:21:46.730466   13076 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 07:21:46.761153   13076 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600 for IP: 172.20.108.120
	I1014 07:21:46.761153   13076 certs.go:194] generating shared ca certs ...
	I1014 07:21:46.761153   13076 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:21:46.761825   13076 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I1014 07:21:46.762680   13076 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I1014 07:21:46.762846   13076 certs.go:256] generating profile certs ...
	I1014 07:21:46.763605   13076 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\client.key
	I1014 07:21:46.763605   13076 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\client.crt with IP's: []
	I1014 07:21:46.955865   13076 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\client.crt ...
	I1014 07:21:46.955865   13076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\client.crt: {Name:mk4a5e51a4b9d62279c7ef4ef47716bcdf88d704 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:21:46.957857   13076 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\client.key ...
	I1014 07:21:46.957857   13076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\client.key: {Name:mkc80e99cfe4d8d17198a87fb188cfa1a72d727d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:21:46.958881   13076 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.key.59094d61
	I1014 07:21:46.958881   13076 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.crt.59094d61 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.20.108.120 172.20.111.254]
	I1014 07:21:47.375660   13076 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.crt.59094d61 ...
	I1014 07:21:47.375660   13076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.crt.59094d61: {Name:mke858ab0ac103663123708f9814cfdffe03211e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:21:47.377216   13076 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.key.59094d61 ...
	I1014 07:21:47.377216   13076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.key.59094d61: {Name:mk972fca892a193d6b4e84d42e27ed86ce9f0457 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:21:47.377811   13076 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.crt.59094d61 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.crt
	I1014 07:21:47.391806   13076 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.key.59094d61 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.key
	I1014 07:21:47.392823   13076 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.key
	I1014 07:21:47.392823   13076 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.crt with IP's: []
	I1014 07:21:47.675751   13076 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.crt ...
	I1014 07:21:47.675751   13076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.crt: {Name:mk5a9f1239213dd0a0f36b4ee41d5ac56cbd79c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:21:47.677918   13076 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.key ...
	I1014 07:21:47.677918   13076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.key: {Name:mkb931094180fef1516b0d43acb6430ecbcbb3fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:21:47.678894   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I1014 07:21:47.678894   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I1014 07:21:47.678894   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1014 07:21:47.678894   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1014 07:21:47.679943   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1014 07:21:47.679943   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1014 07:21:47.679943   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1014 07:21:47.691415   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1014 07:21:47.693468   13076 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\936.pem (1338 bytes)
	W1014 07:21:47.693468   13076 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\936_empty.pem, impossibly tiny 0 bytes
	I1014 07:21:47.694004   13076 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1014 07:21:47.694400   13076 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I1014 07:21:47.694671   13076 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1014 07:21:47.695091   13076 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I1014 07:21:47.695960   13076 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem (1708 bytes)
	I1014 07:21:47.696145   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\936.pem -> /usr/share/ca-certificates/936.pem
	I1014 07:21:47.696401   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem -> /usr/share/ca-certificates/9362.pem
	I1014 07:21:47.696580   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1014 07:21:47.697580   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 07:21:47.742705   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1014 07:21:47.790988   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 07:21:47.837975   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 07:21:47.884872   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1014 07:21:47.936878   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1014 07:21:47.985330   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 07:21:48.029181   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 07:21:48.078824   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\936.pem --> /usr/share/ca-certificates/936.pem (1338 bytes)
	I1014 07:21:48.127907   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem --> /usr/share/ca-certificates/9362.pem (1708 bytes)
	I1014 07:21:48.174542   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 07:21:48.218635   13076 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 07:21:48.264009   13076 ssh_runner.go:195] Run: openssl version
	I1014 07:21:48.282939   13076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 07:21:48.310256   13076 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 07:21:48.318050   13076 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 13:44 /usr/share/ca-certificates/minikubeCA.pem
	I1014 07:21:48.328768   13076 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 07:21:48.348090   13076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 07:21:48.376484   13076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/936.pem && ln -fs /usr/share/ca-certificates/936.pem /etc/ssl/certs/936.pem"
	I1014 07:21:48.408494   13076 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/936.pem
	I1014 07:21:48.416513   13076 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 14:00 /usr/share/ca-certificates/936.pem
	I1014 07:21:48.427815   13076 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/936.pem
	I1014 07:21:48.447033   13076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/936.pem /etc/ssl/certs/51391683.0"
	I1014 07:21:48.474293   13076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9362.pem && ln -fs /usr/share/ca-certificates/9362.pem /etc/ssl/certs/9362.pem"
	I1014 07:21:48.502293   13076 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9362.pem
	I1014 07:21:48.509688   13076 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 14:00 /usr/share/ca-certificates/9362.pem
	I1014 07:21:48.519811   13076 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9362.pem
	I1014 07:21:48.540403   13076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9362.pem /etc/ssl/certs/3ec20f2e.0"
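The openssl/ln pairs above follow OpenSSL's hashed-directory convention: a CA in /etc/ssl/certs is found via a symlink named <subject-hash>.0, where the hash comes from openssl x509 -hash. A sketch of the same step (illustrative, not minikube's code):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func linkCert(pem string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            return err
        }
        link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
        return os.Symlink(pem, link)
    }

    func main() {
        fmt.Println(linkCert("/usr/share/ca-certificates/minikubeCA.pem"))
    }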
	I1014 07:21:48.569007   13076 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 07:21:48.575758   13076 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1014 07:21:48.576118   13076 kubeadm.go:392] StartCluster: {Name:ha-132600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-132600 Namespace:default APIServerHAVIP:172.20.111.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.108.120 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 07:21:48.584978   13076 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1014 07:21:48.619430   13076 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 07:21:48.647606   13076 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 07:21:48.676291   13076 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 07:21:48.693679   13076 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 07:21:48.693679   13076 kubeadm.go:157] found existing configuration files:
	
	I1014 07:21:48.703613   13076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 07:21:48.722067   13076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 07:21:48.732618   13076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 07:21:48.759836   13076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 07:21:48.777622   13076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 07:21:48.789748   13076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 07:21:48.818512   13076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 07:21:48.837396   13076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 07:21:48.848696   13076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 07:21:48.876926   13076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 07:21:48.893530   13076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 07:21:48.904372   13076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
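Each grep/rm pair above implements one rule: a kubeconfig under /etc/kubernetes that does not mention the expected control-plane endpoint is stale (or, as here, absent) and is removed so kubeadm init can write a fresh one. A compact Go restatement (a sketch, not the actual implementation):

    package main

    import (
        "os"
        "strings"
    )

    func main() {
        const endpoint = "https://control-plane.minikube.internal:8443"
        for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
            path := "/etc/kubernetes/" + f
            data, err := os.ReadFile(path)
            if err != nil || !strings.Contains(string(data), endpoint) {
                os.Remove(path) // stale or missing: let kubeadm regenerate it
            }
        }
    }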
	I1014 07:21:48.920210   13076 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1014 07:21:49.392185   13076 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 07:22:04.140256   13076 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1014 07:22:04.140412   13076 kubeadm.go:310] [preflight] Running pre-flight checks
	I1014 07:22:04.140501   13076 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 07:22:04.140848   13076 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 07:22:04.141206   13076 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 07:22:04.141311   13076 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 07:22:04.144180   13076 out.go:235]   - Generating certificates and keys ...
	I1014 07:22:04.144373   13076 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1014 07:22:04.144575   13076 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1014 07:22:04.144939   13076 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1014 07:22:04.145172   13076 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1014 07:22:04.145315   13076 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1014 07:22:04.145521   13076 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1014 07:22:04.145845   13076 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1014 07:22:04.146212   13076 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-132600 localhost] and IPs [172.20.108.120 127.0.0.1 ::1]
	I1014 07:22:04.146365   13076 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1014 07:22:04.146614   13076 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-132600 localhost] and IPs [172.20.108.120 127.0.0.1 ::1]
	I1014 07:22:04.147003   13076 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1014 07:22:04.147200   13076 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1014 07:22:04.147236   13076 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1014 07:22:04.147236   13076 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 07:22:04.147236   13076 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 07:22:04.147236   13076 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 07:22:04.147954   13076 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 07:22:04.147987   13076 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 07:22:04.147987   13076 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 07:22:04.147987   13076 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 07:22:04.148785   13076 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 07:22:04.151235   13076 out.go:235]   - Booting up control plane ...
	I1014 07:22:04.151235   13076 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 07:22:04.151819   13076 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 07:22:04.152046   13076 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 07:22:04.152333   13076 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 07:22:04.152333   13076 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 07:22:04.152333   13076 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1014 07:22:04.152333   13076 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 07:22:04.152333   13076 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 07:22:04.153426   13076 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 523.263843ms
	I1014 07:22:04.153534   13076 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1014 07:22:04.153534   13076 kubeadm.go:310] [api-check] The API server is healthy after 9.176461865s
	I1014 07:22:04.153534   13076 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1014 07:22:04.154243   13076 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1014 07:22:04.154243   13076 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1014 07:22:04.154779   13076 kubeadm.go:310] [mark-control-plane] Marking the node ha-132600 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1014 07:22:04.154779   13076 kubeadm.go:310] [bootstrap-token] Using token: rm3d8a.inqsemtt5b5l5x2e
	I1014 07:22:04.157926   13076 out.go:235]   - Configuring RBAC rules ...
	I1014 07:22:04.158760   13076 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1014 07:22:04.158928   13076 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1014 07:22:04.158928   13076 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1014 07:22:04.159656   13076 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1014 07:22:04.159851   13076 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1014 07:22:04.160089   13076 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1014 07:22:04.160270   13076 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1014 07:22:04.160529   13076 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1014 07:22:04.160529   13076 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1014 07:22:04.160529   13076 kubeadm.go:310] 
	I1014 07:22:04.160529   13076 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1014 07:22:04.160529   13076 kubeadm.go:310] 
	I1014 07:22:04.160529   13076 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1014 07:22:04.160529   13076 kubeadm.go:310] 
	I1014 07:22:04.161100   13076 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1014 07:22:04.161186   13076 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1014 07:22:04.161352   13076 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1014 07:22:04.161352   13076 kubeadm.go:310] 
	I1014 07:22:04.161534   13076 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1014 07:22:04.161534   13076 kubeadm.go:310] 
	I1014 07:22:04.161706   13076 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1014 07:22:04.161706   13076 kubeadm.go:310] 
	I1014 07:22:04.161950   13076 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1014 07:22:04.161950   13076 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1014 07:22:04.161950   13076 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1014 07:22:04.161950   13076 kubeadm.go:310] 
	I1014 07:22:04.162515   13076 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1014 07:22:04.162607   13076 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1014 07:22:04.162607   13076 kubeadm.go:310] 
	I1014 07:22:04.162607   13076 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token rm3d8a.inqsemtt5b5l5x2e \
	I1014 07:22:04.162607   13076 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f876f951efbe7f1ed47ca578ee0d33e6ce8fe69c1a008a8154ee00f56ac84e46 \
	I1014 07:22:04.163162   13076 kubeadm.go:310] 	--control-plane 
	I1014 07:22:04.163261   13076 kubeadm.go:310] 
	I1014 07:22:04.163340   13076 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1014 07:22:04.163340   13076 kubeadm.go:310] 
	I1014 07:22:04.163570   13076 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token rm3d8a.inqsemtt5b5l5x2e \
	I1014 07:22:04.163570   13076 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f876f951efbe7f1ed47ca578ee0d33e6ce8fe69c1a008a8154ee00f56ac84e46 
	I1014 07:22:04.163570   13076 cni.go:84] Creating CNI manager for ""
	I1014 07:22:04.163570   13076 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1014 07:22:04.167124   13076 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1014 07:22:04.179134   13076 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1014 07:22:04.187701   13076 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1014 07:22:04.187701   13076 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1014 07:22:04.233053   13076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1014 07:22:04.912894   13076 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1014 07:22:04.925574   13076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-132600 minikube.k8s.io/updated_at=2024_10_14T07_22_04_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b minikube.k8s.io/name=ha-132600 minikube.k8s.io/primary=true
	I1014 07:22:04.927795   13076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 07:22:04.958241   13076 ops.go:34] apiserver oom_adj: -16
	I1014 07:22:05.168034   13076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 07:22:05.667048   13076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 07:22:06.166157   13076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 07:22:06.667561   13076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 07:22:07.167283   13076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 07:22:07.305312   13076 kubeadm.go:1113] duration metric: took 2.3924152s to wait for elevateKubeSystemPrivileges
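The repeated "kubectl get sa default" calls above are a readiness poll: the "default" ServiceAccount only appears once the controller-manager has started, and workloads cannot be admitted into a namespace before its default SA exists. The minimal form of that loop (illustrative):

    package main

    import (
        "os/exec"
        "time"
    )

    func main() {
        for exec.Command("kubectl", "get", "sa", "default").Run() != nil {
            time.Sleep(500 * time.Millisecond) // retry until the SA exists
        }
    }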
	I1014 07:22:07.306284   13076 kubeadm.go:394] duration metric: took 18.730142s to StartCluster
	I1014 07:22:07.306284   13076 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:22:07.306284   13076 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1014 07:22:07.308077   13076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:22:07.309777   13076 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1014 07:22:07.309777   13076 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.20.108.120 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1014 07:22:07.309953   13076 start.go:241] waiting for startup goroutines ...
	I1014 07:22:07.309868   13076 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1014 07:22:07.310143   13076 addons.go:69] Setting storage-provisioner=true in profile "ha-132600"
	I1014 07:22:07.310235   13076 addons.go:234] Setting addon storage-provisioner=true in "ha-132600"
	I1014 07:22:07.310143   13076 addons.go:69] Setting default-storageclass=true in profile "ha-132600"
	I1014 07:22:07.310402   13076 config.go:182] Loaded profile config "ha-132600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:22:07.310402   13076 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-132600"
	I1014 07:22:07.310402   13076 host.go:66] Checking if "ha-132600" exists ...
	I1014 07:22:07.311457   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:22:07.311926   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:22:07.495748   13076 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.20.96.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1014 07:22:08.130501   13076 start.go:971] {"host.minikube.internal": 172.20.96.1} host record injected into CoreDNS's ConfigMap
	I1014 07:22:09.594501   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:22:09.594501   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:09.594786   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:22:09.594878   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:09.595740   13076 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1014 07:22:09.596691   13076 kapi.go:59] client config for ha-132600: &rest.Config{Host:"https://172.20.111.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-132600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-132600\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2926ae0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1014 07:22:09.597939   13076 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 07:22:09.598267   13076 cert_rotation.go:140] Starting client certificate rotation controller
	I1014 07:22:09.598703   13076 addons.go:234] Setting addon default-storageclass=true in "ha-132600"
	I1014 07:22:09.598956   13076 host.go:66] Checking if "ha-132600" exists ...
	I1014 07:22:09.600143   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:22:09.600143   13076 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 07:22:09.600247   13076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1014 07:22:09.600352   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:22:11.901872   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:22:11.901872   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:11.902855   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:22:11.913525   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:22:11.914554   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:11.914725   13076 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1014 07:22:11.914841   13076 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1014 07:22:11.915002   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:22:14.163865   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:22:14.163865   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:14.163865   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:22:14.597752   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:22:14.597752   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:14.598273   13076 sshutil.go:53] new ssh client: &{IP:172.20.108.120 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\id_rsa Username:docker}
	I1014 07:22:14.755906   13076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 07:22:16.766331   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:22:16.766798   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:16.767079   13076 sshutil.go:53] new ssh client: &{IP:172.20.108.120 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\id_rsa Username:docker}
	I1014 07:22:16.907784   13076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1014 07:22:17.107355   13076 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1014 07:22:17.107355   13076 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1014 07:22:17.108322   13076 round_trippers.go:463] GET https://172.20.111.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1014 07:22:17.108322   13076 round_trippers.go:469] Request Headers:
	I1014 07:22:17.108322   13076 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:22:17.108322   13076 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:22:17.120122   13076 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1014 07:22:17.120927   13076 round_trippers.go:463] PUT https://172.20.111.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1014 07:22:17.121006   13076 round_trippers.go:469] Request Headers:
	I1014 07:22:17.121006   13076 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:22:17.121006   13076 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:22:17.121089   13076 round_trippers.go:473]     Content-Type: application/json
	I1014 07:22:17.124876   13076 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 07:22:17.128782   13076 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1014 07:22:17.133792   13076 addons.go:510] duration metric: took 9.8239112s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1014 07:22:17.133792   13076 start.go:246] waiting for cluster config update ...
	I1014 07:22:17.133792   13076 start.go:255] writing updated cluster config ...
	I1014 07:22:17.140793   13076 out.go:201] 
	I1014 07:22:17.150794   13076 config.go:182] Loaded profile config "ha-132600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:22:17.151784   13076 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\config.json ...
	I1014 07:22:17.157758   13076 out.go:177] * Starting "ha-132600-m02" control-plane node in "ha-132600" cluster
	I1014 07:22:17.160381   13076 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1014 07:22:17.160381   13076 cache.go:56] Caching tarball of preloaded images
	I1014 07:22:17.160381   13076 preload.go:172] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1014 07:22:17.160381   13076 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1014 07:22:17.160381   13076 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\config.json ...
	I1014 07:22:17.168672   13076 start.go:360] acquireMachinesLock for ha-132600-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 07:22:17.169194   13076 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-132600-m02"
	I1014 07:22:17.169334   13076 start.go:93] Provisioning new machine with config: &{Name:ha-132600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-132600 Namespace:default APIServerHAVIP:172.20.111.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.108.120 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1014 07:22:17.169334   13076 start.go:125] createHost starting for "m02" (driver="hyperv")
	I1014 07:22:17.171957   13076 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1014 07:22:17.171957   13076 start.go:159] libmachine.API.Create for "ha-132600" (driver="hyperv")
	I1014 07:22:17.171957   13076 client.go:168] LocalClient.Create starting
	I1014 07:22:17.171957   13076 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I1014 07:22:17.173148   13076 main.go:141] libmachine: Decoding PEM data...
	I1014 07:22:17.173148   13076 main.go:141] libmachine: Parsing certificate...
	I1014 07:22:17.173148   13076 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I1014 07:22:17.173148   13076 main.go:141] libmachine: Decoding PEM data...
	I1014 07:22:17.173802   13076 main.go:141] libmachine: Parsing certificate...
	I1014 07:22:17.173802   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I1014 07:22:19.094383   13076 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I1014 07:22:19.094383   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:19.094970   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I1014 07:22:20.808796   13076 main.go:141] libmachine: [stdout =====>] : False
	
	I1014 07:22:20.808796   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:20.808897   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1014 07:22:22.292959   13076 main.go:141] libmachine: [stdout =====>] : True
	
	I1014 07:22:22.292959   13076 main.go:141] libmachine: [stderr =====>] : 
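
Note: before touching Hyper-V, the driver probes its privileges in two steps, as the pair of IsInRole checks above shows: membership in the built-in "Hyper-V Administrators" group (well-known SID S-1-5-32-578) returns False, so it falls back to the full Administrator role, which returns True. The same one-liners, reformatted for readability:

    # member of Hyper-V Administrators (SID S-1-5-32-578)?  -> False
    $p = [Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()
    $p.IsInRole([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578"))
    # built-in Administrator role?  -> True
    $p.IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
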
	I1014 07:22:22.293074   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1014 07:22:25.811891   13076 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1014 07:22:25.811891   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:25.813583   13076 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1014 07:22:26.334218   13076 main.go:141] libmachine: Creating SSH key...
	I1014 07:22:26.579833   13076 main.go:141] libmachine: Creating VM...
	I1014 07:22:26.580305   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1014 07:22:29.402237   13076 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1014 07:22:29.402237   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:29.402237   13076 main.go:141] libmachine: Using switch "Default Switch"
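
Note: the switch query above (run once during preflight and again just before VM creation) keeps any External vSwitch plus the built-in Default Switch, matched by its fixed GUID; SwitchType 1 in the JSON output is the Internal type, so this host has no external switch and the NAT'd Default Switch is selected. The one-liner, unrolled:

    Hyper-V\Get-VMSwitch | Select Id, Name, SwitchType |
        Where-Object { ($_.SwitchType -eq 'External') -or
                       ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444') } |
        Sort-Object -Property SwitchType
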
	I1014 07:22:29.402237   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1014 07:22:31.181025   13076 main.go:141] libmachine: [stdout =====>] : True
	
	I1014 07:22:31.181025   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:31.181025   13076 main.go:141] libmachine: Creating VHD
	I1014 07:22:31.181025   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I1014 07:22:34.986705   13076 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 9BEBB030-6569-4A5D-A43D-C976E242B24D
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I1014 07:22:34.986953   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:34.986953   13076 main.go:141] libmachine: Writing magic tar header
	I1014 07:22:34.986953   13076 main.go:141] libmachine: Writing SSH key tar header
	I1014 07:22:34.998864   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I1014 07:22:38.117497   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:22:38.117497   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:38.117497   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\disk.vhd' -SizeBytes 20000MB
	I1014 07:22:40.750048   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:22:40.750048   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:40.750048   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-132600-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I1014 07:22:44.293371   13076 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-132600-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I1014 07:22:44.294009   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:44.294158   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-132600-m02 -DynamicMemoryEnabled $false
	I1014 07:22:46.495846   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:22:46.496106   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:46.496211   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-132600-m02 -Count 2
	I1014 07:22:48.637702   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:22:48.637702   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:48.637702   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-132600-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\boot2docker.iso'
	I1014 07:22:51.128606   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:22:51.128606   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:51.128606   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-132600-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\disk.vhd'
	I1014 07:22:53.765202   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:22:53.765775   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:53.765775   13076 main.go:141] libmachine: Starting VM...
	I1014 07:22:53.765847   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-132600-m02
	I1014 07:22:56.961396   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:22:56.961396   13076 main.go:141] libmachine: [stderr =====>] : 
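
Note: the "Writing magic tar header" / "Writing SSH key tar header" lines mark libmachine's key-injection trick: the generated SSH key is written as a raw tar stream into a small fixed VHD, which is then converted to a dynamic disk and grown to full size, a convention the boot2docker guest consumes on first boot (the guest side is not visible in this log). Condensed from the commands above, with the machine directory shortened to "...":

    Hyper-V\New-VHD -Path ...\fixed.vhd -SizeBytes 10MB -Fixed
    # host writes the tar header and SSH key into fixed.vhd
    Hyper-V\Convert-VHD -Path ...\fixed.vhd -DestinationPath ...\disk.vhd -VHDType Dynamic -DeleteSource
    Hyper-V\Resize-VHD -Path ...\disk.vhd -SizeBytes 20000MB
    Hyper-V\New-VM ha-132600-m02 -Path ... -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
    Hyper-V\Set-VMMemory -VMName ha-132600-m02 -DynamicMemoryEnabled $false
    Hyper-V\Set-VMProcessor ha-132600-m02 -Count 2
    Hyper-V\Set-VMDvdDrive -VMName ha-132600-m02 -Path ...\boot2docker.iso
    Hyper-V\Add-VMHardDiskDrive -VMName ha-132600-m02 -Path ...\disk.vhd
    Hyper-V\Start-VM ha-132600-m02
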
	I1014 07:22:56.962040   13076 main.go:141] libmachine: Waiting for host to start...
	I1014 07:22:56.962040   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:22:59.251592   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:22:59.251592   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:59.251838   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:01.758067   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:23:01.758159   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:02.758293   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:04.921490   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:04.921565   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:04.921802   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:07.441595   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:23:07.441595   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:08.443053   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:10.659721   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:10.659721   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:10.660418   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:13.126618   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:23:13.126952   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:14.127183   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:16.307414   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:16.307414   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:16.307500   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:18.748902   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:23:18.749678   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:19.749962   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:21.893323   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:21.893323   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:21.893323   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:24.464351   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:23:24.465392   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:24.465471   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:26.567626   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:26.567626   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:26.567626   13076 machine.go:93] provisionDockerMachine start ...
	I1014 07:23:26.568400   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:28.681591   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:28.681591   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:28.682618   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:31.178308   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:23:31.178308   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:31.192309   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:23:31.207523   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.111.83 22 <nil> <nil>}
	I1014 07:23:31.208024   13076 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 07:23:31.337861   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1014 07:23:31.337861   13076 buildroot.go:166] provisioning hostname "ha-132600-m02"
	I1014 07:23:31.337941   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:33.402839   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:33.402839   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:33.403340   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:35.894312   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:23:35.894312   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:35.901210   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:23:35.901699   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.111.83 22 <nil> <nil>}
	I1014 07:23:35.901823   13076 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-132600-m02 && echo "ha-132600-m02" | sudo tee /etc/hostname
	I1014 07:23:36.071040   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-132600-m02
	
	I1014 07:23:36.071040   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:38.150707   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:38.151729   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:38.151729   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:40.639242   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:23:40.639242   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:40.644598   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:23:40.645419   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.111.83 22 <nil> <nil>}
	I1014 07:23:40.645490   13076 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-132600-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-132600-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-132600-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 07:23:40.801974   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: 
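
Note: the guarded script above keeps the /etc/hosts edit idempotent: nothing happens if a line already ends in the new hostname; otherwise an existing 127.0.1.1 entry is rewritten in place, or one is appended. Either way the file ends up with a single line of the form:

    127.0.1.1 ha-132600-m02
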
	I1014 07:23:40.801974   13076 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I1014 07:23:40.801974   13076 buildroot.go:174] setting up certificates
	I1014 07:23:40.801974   13076 provision.go:84] configureAuth start
	I1014 07:23:40.801974   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:42.869101   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:42.869101   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:42.869101   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:45.382211   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:23:45.382688   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:45.382688   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:47.501182   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:47.501778   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:47.501832   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:50.008838   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:23:50.008838   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:50.009193   13076 provision.go:143] copyHostCerts
	I1014 07:23:50.009346   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I1014 07:23:50.009346   13076 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I1014 07:23:50.009346   13076 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I1014 07:23:50.010015   13076 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1014 07:23:50.011395   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I1014 07:23:50.011967   13076 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I1014 07:23:50.011967   13076 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I1014 07:23:50.011967   13076 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1014 07:23:50.013212   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I1014 07:23:50.014109   13076 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I1014 07:23:50.014109   13076 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I1014 07:23:50.014507   13076 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I1014 07:23:50.015649   13076 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-132600-m02 san=[127.0.0.1 172.20.111.83 ha-132600-m02 localhost minikube]
	I1014 07:23:50.130152   13076 provision.go:177] copyRemoteCerts
	I1014 07:23:50.140319   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 07:23:50.140319   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:52.242533   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:52.242533   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:52.243067   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:54.757999   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:23:54.758132   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:54.758132   13076 sshutil.go:53] new ssh client: &{IP:172.20.111.83 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\id_rsa Username:docker}
	I1014 07:23:54.867658   13076 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7273333s)
	I1014 07:23:54.867658   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1014 07:23:54.867658   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1014 07:23:54.915126   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1014 07:23:54.915808   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I1014 07:23:54.962775   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1014 07:23:54.963449   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1014 07:23:55.012733   13076 provision.go:87] duration metric: took 14.2107408s to configureAuth
	I1014 07:23:55.012733   13076 buildroot.go:189] setting minikube options for container-runtime
	I1014 07:23:55.013734   13076 config.go:182] Loaded profile config "ha-132600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:23:55.013734   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:57.174688   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:57.174688   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:57.175736   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:59.668037   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:23:59.668597   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:59.675074   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:23:59.675608   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.111.83 22 <nil> <nil>}
	I1014 07:23:59.675873   13076 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1014 07:23:59.826626   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1014 07:23:59.826626   13076 buildroot.go:70] root file system type: tmpfs
	I1014 07:23:59.826926   13076 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1014 07:23:59.827038   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:01.963890   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:01.964568   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:01.964568   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:04.515125   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:04.515943   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:04.521824   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:24:04.522248   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.111.83 22 <nil> <nil>}
	I1014 07:24:04.522248   13076 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.20.108.120"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1014 07:24:04.691427   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.20.108.120
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1014 07:24:04.692090   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:06.819823   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:06.820339   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:06.820339   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:09.321630   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:09.321630   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:09.326748   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:24:09.327748   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.111.83 22 <nil> <nil>}
	I1014 07:24:09.327808   13076 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1014 07:24:11.561493   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
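
Note: the "diff -u old new || { ... }" idiom installs the unit only when it differs from what is on disk; on this fresh VM the diff itself fails ("can't stat ... docker.service"), so the new file is moved into place, systemd is reloaded, and docker is enabled and restarted; the "Created symlink" line is the output of systemctl enable. Two details of the generated unit: Environment=NO_PROXY=172.20.108.120 exempts the primary control-plane IP from any proxy, and the backslash in "\$MAINPID" protects the variable from the remote shell so a literal $MAINPID lands in the file (compare the echoed unit above, where the backslash and the surrounding quotes are gone). The empty ExecStart= line clears the start command inherited from the base unit before setting dockerd's, as the in-file comments explain.
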
	
	I1014 07:24:11.561493   13076 machine.go:96] duration metric: took 44.993809s to provisionDockerMachine
	I1014 07:24:11.561493   13076 client.go:171] duration metric: took 1m54.3893961s to LocalClient.Create
	I1014 07:24:11.561493   13076 start.go:167] duration metric: took 1m54.3893961s to libmachine.API.Create "ha-132600"
	I1014 07:24:11.561493   13076 start.go:293] postStartSetup for "ha-132600-m02" (driver="hyperv")
	I1014 07:24:11.561493   13076 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 07:24:11.572490   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 07:24:11.572490   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:13.652544   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:13.653401   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:13.653599   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:16.188638   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:16.188638   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:16.189304   13076 sshutil.go:53] new ssh client: &{IP:172.20.111.83 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\id_rsa Username:docker}
	I1014 07:24:16.298812   13076 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7262867s)
	I1014 07:24:16.310231   13076 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 07:24:16.316927   13076 info.go:137] Remote host: Buildroot 2023.02.9
	I1014 07:24:16.316927   13076 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I1014 07:24:16.317465   13076 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I1014 07:24:16.318548   13076 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem -> 9362.pem in /etc/ssl/certs
	I1014 07:24:16.318548   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem -> /etc/ssl/certs/9362.pem
	I1014 07:24:16.332285   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 07:24:16.351469   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem --> /etc/ssl/certs/9362.pem (1708 bytes)
	I1014 07:24:16.401820   13076 start.go:296] duration metric: took 4.8403207s for postStartSetup
	I1014 07:24:16.404096   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:18.500218   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:18.501234   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:18.501234   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:20.974982   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:20.975125   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:20.975125   13076 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\config.json ...
	I1014 07:24:20.978124   13076 start.go:128] duration metric: took 2m3.8086365s to createHost
	I1014 07:24:20.978209   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:23.112937   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:23.113001   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:23.113001   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:25.636675   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:25.637206   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:25.641953   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:24:25.642670   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.111.83 22 <nil> <nil>}
	I1014 07:24:25.642670   13076 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1014 07:24:25.768945   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728915865.766702127
	
	I1014 07:24:25.769174   13076 fix.go:216] guest clock: 1728915865.766702127
	I1014 07:24:25.769174   13076 fix.go:229] Guest: 2024-10-14 07:24:25.766702127 -0700 PDT Remote: 2024-10-14 07:24:20.978124 -0700 PDT m=+320.730336401 (delta=4.788578127s)
	I1014 07:24:25.769174   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:27.845838   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:27.845838   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:27.845838   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:30.364175   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:30.364263   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:30.369382   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:24:30.369967   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.111.83 22 <nil> <nil>}
	I1014 07:24:30.370049   13076 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1728915865
	I1014 07:24:30.508807   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Oct 14 14:24:25 UTC 2024
	
	I1014 07:24:30.508807   13076 fix.go:236] clock set: Mon Oct 14 14:24:25 UTC 2024
	 (err=<nil>)
	I1014 07:24:30.508807   13076 start.go:83] releasing machines lock for "ha-132600-m02", held for 2m13.3394475s
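
Note: the guest clock is trued up over SSH: the host reads the guest's clock, compares it with its own (delta=4.788578127s here), and rewrites the guest clock in whole epoch seconds. Both commands appear verbatim above:

    date +%s.%N                # read the guest clock (1728915865.766702127)
    sudo date -s @1728915865   # set the guest clock to epoch seconds
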
	I1014 07:24:30.508807   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:32.570516   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:32.570516   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:32.570516   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:35.081338   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:35.081338   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:35.084330   13076 out.go:177] * Found network options:
	I1014 07:24:35.087197   13076 out.go:177]   - NO_PROXY=172.20.108.120
	W1014 07:24:35.089531   13076 proxy.go:119] fail to check proxy env: Error ip not in block
	I1014 07:24:35.091879   13076 out.go:177]   - NO_PROXY=172.20.108.120
	W1014 07:24:35.094409   13076 proxy.go:119] fail to check proxy env: Error ip not in block
	W1014 07:24:35.095622   13076 proxy.go:119] fail to check proxy env: Error ip not in block
	I1014 07:24:35.098960   13076 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1014 07:24:35.099124   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:35.109488   13076 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1014 07:24:35.109488   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:37.293745   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:37.293818   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:37.293870   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:37.344951   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:37.344951   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:37.344951   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:39.933767   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:39.934139   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:39.934445   13076 sshutil.go:53] new ssh client: &{IP:172.20.111.83 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\id_rsa Username:docker}
	I1014 07:24:39.999458   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:39.999458   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:39.999458   13076 sshutil.go:53] new ssh client: &{IP:172.20.111.83 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\id_rsa Username:docker}
	I1014 07:24:40.039802   13076 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.930308s)
	W1014 07:24:40.039802   13076 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 07:24:40.051889   13076 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 07:24:40.057601   13076 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.9585844s)
	W1014 07:24:40.057601   13076 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1014 07:24:40.090774   13076 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1014 07:24:40.090774   13076 start.go:495] detecting cgroup driver to use...
	I1014 07:24:40.091315   13076 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 07:24:40.139757   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1014 07:24:40.172033   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	W1014 07:24:40.178804   13076 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W1014 07:24:40.178804   13076 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1014 07:24:40.194040   13076 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1014 07:24:40.209169   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1014 07:24:40.241042   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1014 07:24:40.273436   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1014 07:24:40.304842   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1014 07:24:40.336705   13076 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 07:24:40.371635   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1014 07:24:40.404378   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1014 07:24:40.435805   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1014 07:24:40.485319   13076 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 07:24:40.505473   13076 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1014 07:24:40.516975   13076 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1014 07:24:40.550905   13076 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 07:24:40.581632   13076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:24:40.785773   13076 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1014 07:24:40.819978   13076 start.go:495] detecting cgroup driver to use...
	I1014 07:24:40.830992   13076 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1014 07:24:40.876801   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 07:24:40.913930   13076 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 07:24:40.966444   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 07:24:41.006432   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1014 07:24:41.044531   13076 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1014 07:24:41.114617   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1014 07:24:41.139995   13076 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 07:24:41.187779   13076 ssh_runner.go:195] Run: which cri-dockerd
	I1014 07:24:41.207367   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1014 07:24:41.226988   13076 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1014 07:24:41.272924   13076 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1014 07:24:41.470204   13076 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1014 07:24:41.650925   13076 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1014 07:24:41.650925   13076 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1014 07:24:41.694417   13076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:24:41.893295   13076 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1014 07:25:43.008408   13076 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.115034s)
	I1014 07:25:43.020423   13076 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I1014 07:25:43.059257   13076 out.go:201] 
	W1014 07:25:43.062124   13076 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Oct 14 14:24:09 ha-132600-m02 systemd[1]: Starting Docker Application Container Engine...
	Oct 14 14:24:09 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:09.938489944Z" level=info msg="Starting up"
	Oct 14 14:24:09 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:09.941531731Z" level=info msg="containerd not running, starting managed containerd"
	Oct 14 14:24:09 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:09.942638427Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=671
	Oct 14 14:24:09 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:09.976195986Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.003978070Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004049170Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004119469Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004233069Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004318268Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004457268Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004664767Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004761167Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004782067Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004795467Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004887666Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.005331964Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.008453752Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.008545052Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.008673551Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.008759151Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.008859951Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.008988550Z" level=info msg="metadata content store policy set" policy=shared
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.034095951Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.034335550Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.034407650Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.034432250Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.034449450Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.034603849Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.034945948Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035129847Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035263947Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035294447Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035312846Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035328546Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035343646Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035419846Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035443346Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035495946Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035526946Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035542746Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035565345Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035583545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035598745Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035630745Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035648245Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035663945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035689645Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035709545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035731545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035750545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035764245Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035777845Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035792145Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035809345Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035833944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035849444Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035864444Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036025444Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036057644Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036074443Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036087843Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036099043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036115343Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036131843Z" level=info msg="NRI interface is disabled by configuration."
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036482842Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036737141Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036799541Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036822041Z" level=info msg="containerd successfully booted in 0.062283s"
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.009541514Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.046546477Z" level=info msg="Loading containers: start."
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.205429391Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.426461627Z" level=info msg="Loading containers: done."
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.448023414Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.448125314Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.448200613Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.448511212Z" level=info msg="Daemon has completed initialization"
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.556153647Z" level=info msg="API listen on /var/run/docker.sock"
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.556338446Z" level=info msg="API listen on [::]:2376"
	Oct 14 14:24:11 ha-132600-m02 systemd[1]: Started Docker Application Container Engine.
	Oct 14 14:24:41 ha-132600-m02 systemd[1]: Stopping Docker Application Container Engine...
	Oct 14 14:24:41 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:41.916201367Z" level=info msg="Processing signal 'terminated'"
	Oct 14 14:24:41 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:41.918697964Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Oct 14 14:24:41 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:41.919058063Z" level=info msg="Daemon shutdown complete"
	Oct 14 14:24:41 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:41.919215663Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Oct 14 14:24:41 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:41.919252063Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Oct 14 14:24:42 ha-132600-m02 systemd[1]: docker.service: Deactivated successfully.
	Oct 14 14:24:42 ha-132600-m02 systemd[1]: Stopped Docker Application Container Engine.
	Oct 14 14:24:42 ha-132600-m02 systemd[1]: Starting Docker Application Container Engine...
	Oct 14 14:24:42 ha-132600-m02 dockerd[1075]: time="2024-10-14T14:24:42.978494717Z" level=info msg="Starting up"
	Oct 14 14:25:43 ha-132600-m02 dockerd[1075]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Oct 14 14:25:43 ha-132600-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Oct 14 14:25:43 ha-132600-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Oct 14 14:25:43 ha-132600-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W1014 07:25:43.062124   13076 out.go:270] * 
	W1014 07:25:43.062827   13076 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 07:25:43.067027   13076 out.go:201] 
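	The journalctl excerpt above pins down the proximate failure: after the containerd config rewrite, dockerd on ha-132600-m02 restarts but cannot dial /run/containerd/containerd.sock before its deadline, so docker.service exits with status 1 and minikube aborts with RUNTIME_ENABLE. A minimal manual triage sketch, assuming the m02 VM is still reachable over SSH (standard minikube/systemd commands, not part of the test harness):

	  # Shell into the failing worker node (node name as reported in the log above).
	  minikube ssh -p ha-132600 -n ha-132600-m02

	  # Inside the VM: did containerd come back up, and is its socket present?
	  sudo systemctl status containerd --no-pager
	  ls -l /run/containerd/containerd.sock

	  # Containerd's own log for the window in which dockerd timed out.
	  sudo journalctl -u containerd --no-pager | tail -n 50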
	
	
	==> Docker <==
	Oct 14 14:22:32 ha-132600 cri-dockerd[1324]: time="2024-10-14T14:22:32Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d7137fb8ec569089599a03e11d18eda509d210d29fa54b5e6a0a8d7dd7a54e7f/resolv.conf as [nameserver 172.20.96.1]"
	Oct 14 14:22:32 ha-132600 cri-dockerd[1324]: time="2024-10-14T14:22:32Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/90ffeb0ce1aafcb639832f7144bd2dbb0b5c9deb53555b49e386b3d3e96d8bc3/resolv.conf as [nameserver 172.20.96.1]"
	Oct 14 14:22:32 ha-132600 cri-dockerd[1324]: time="2024-10-14T14:22:32Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7278ecf7cc9466e1fee2170d699055f6498d2ece8da31c4b3696ed85b3cd38cf/resolv.conf as [nameserver 172.20.96.1]"
	Oct 14 14:22:32 ha-132600 dockerd[1431]: time="2024-10-14T14:22:32.922383735Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 14 14:22:32 ha-132600 dockerd[1431]: time="2024-10-14T14:22:32.922463635Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 14 14:22:32 ha-132600 dockerd[1431]: time="2024-10-14T14:22:32.922481235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:22:32 ha-132600 dockerd[1431]: time="2024-10-14T14:22:32.922580735Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:22:32 ha-132600 dockerd[1431]: time="2024-10-14T14:22:32.996609949Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 14 14:22:32 ha-132600 dockerd[1431]: time="2024-10-14T14:22:32.996807948Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 14 14:22:32 ha-132600 dockerd[1431]: time="2024-10-14T14:22:32.996907948Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:22:32 ha-132600 dockerd[1431]: time="2024-10-14T14:22:32.998275745Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:22:33 ha-132600 dockerd[1431]: time="2024-10-14T14:22:33.069443031Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 14 14:22:33 ha-132600 dockerd[1431]: time="2024-10-14T14:22:33.070022429Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 14 14:22:33 ha-132600 dockerd[1431]: time="2024-10-14T14:22:33.070369527Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:22:33 ha-132600 dockerd[1431]: time="2024-10-14T14:22:33.076841098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:26:17 ha-132600 dockerd[1431]: time="2024-10-14T14:26:17.675114485Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 14 14:26:17 ha-132600 dockerd[1431]: time="2024-10-14T14:26:17.675249284Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 14 14:26:17 ha-132600 dockerd[1431]: time="2024-10-14T14:26:17.675264584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:26:17 ha-132600 dockerd[1431]: time="2024-10-14T14:26:17.675974482Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:26:17 ha-132600 cri-dockerd[1324]: time="2024-10-14T14:26:17Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/91751edf004eb5236aa2443a5cff30bdc82ba05d39a3583d9026a4f19faba52f/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Oct 14 14:26:19 ha-132600 cri-dockerd[1324]: time="2024-10-14T14:26:19Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Oct 14 14:26:19 ha-132600 dockerd[1431]: time="2024-10-14T14:26:19.456978196Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 14 14:26:19 ha-132600 dockerd[1431]: time="2024-10-14T14:26:19.457173796Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 14 14:26:19 ha-132600 dockerd[1431]: time="2024-10-14T14:26:19.457616296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:26:19 ha-132600 dockerd[1431]: time="2024-10-14T14:26:19.457980197Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2b8d44d586369       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   19 minutes ago      Running             busybox                   0                   91751edf004eb       busybox-7dff88458-kr92j
	5a6196684fc6e       c69fa2e9cbf5f                                                                                         23 minutes ago      Running             coredns                   0                   7278ecf7cc946       coredns-7c65d6cfc9-4qfrq
	81d6fdac8115f       6e38f40d628db                                                                                         23 minutes ago      Running             storage-provisioner       0                   90ffeb0ce1aaf       storage-provisioner
	52c3e5370a6c8       c69fa2e9cbf5f                                                                                         23 minutes ago      Running             coredns                   0                   d7137fb8ec569       coredns-7c65d6cfc9-zf6cd
	dae2c0aa67af3       kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387              23 minutes ago      Running             kindnet-cni               0                   277e64b59b8aa       kindnet-rkjqr
	4745a4b0dc379       60c005f310ff3                                                                                         23 minutes ago      Running             kube-proxy                0                   15f605d31df67       kube-proxy-zkbj8
	b08f7a3c2a5a6       ghcr.io/kube-vip/kube-vip@sha256:9e23baad11ae3e69d739430b9fdb60df22356b7da4b4f4e458fae0541619deb4     24 minutes ago      Running             kube-vip                  0                   46cfd7b278d40       kube-vip-ha-132600
	84fd76d493d80       2e96e5913fc06                                                                                         24 minutes ago      Running             etcd                      0                   d71566d00e2f9       etcd-ha-132600
	b661cb6713103       6bab7719df100                                                                                         24 minutes ago      Running             kube-apiserver            0                   cb18be6165830       kube-apiserver-ha-132600
	35c870864a800       9aa1fad941575                                                                                         24 minutes ago      Running             kube-scheduler            0                   36593363b631a       kube-scheduler-ha-132600
	4a8cce31aa3a2       175ffd71cce3d                                                                                         24 minutes ago      Running             kube-controller-manager   0                   f5fa69fe68b07       kube-controller-manager-ha-132600
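	For contrast, every component on the primary node ha-132600 is still Running with attempt 0, so the runtime failure is confined to m02. The same view can be pulled from inside the guest (a sketch; crictl is bundled in the minikube VM and talks to cri-dockerd here):

	  minikube ssh -p ha-132600 -- sudo crictl ps --state Running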
	
	
	==> coredns [52c3e5370a6c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6020d6b23d8ab86e45d1d2aab12a43bd19ffd1800c3e6fd0a66779be525e59cd5618fbe141b55ee3a43e920652bc72e7efafa696e55e63b2f614e5fc80dabe10
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:41885 - 43638 "HINFO IN 5147527986633681541.7511835962423692488. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.048565283s
	[INFO] 10.244.0.4:49801 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0004036s
	[INFO] 10.244.0.4:57121 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.178873632s
	[INFO] 10.244.0.4:54985 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000301s
	[INFO] 10.244.0.4:47603 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000273799s
	[INFO] 10.244.0.4:50030 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000276399s
	[INFO] 10.244.0.4:38124 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000439399s
	[INFO] 10.244.0.4:43764 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000347699s
	[INFO] 10.244.0.4:53583 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0003446s
	[INFO] 10.244.0.4:50771 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0002502s
	[INFO] 10.244.0.4:32805 - 5 "PTR IN 1.96.20.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.0002399s
	
	
	==> coredns [5a6196684fc6] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6020d6b23d8ab86e45d1d2aab12a43bd19ffd1800c3e6fd0a66779be525e59cd5618fbe141b55ee3a43e920652bc72e7efafa696e55e63b2f614e5fc80dabe10
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:50464 - 22593 "HINFO IN 8090798371432773748.7872157757733337044. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.084180423s
	[INFO] 10.244.0.4:46373 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.004105792s
	[INFO] 10.244.0.4:41175 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.166814894s
	[INFO] 10.244.0.4:43040 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002867s
	[INFO] 10.244.0.4:33549 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.045685805s
	[INFO] 10.244.0.4:60793 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000884598s
	[INFO] 10.244.0.4:57558 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001339s
	[INFO] 10.244.0.4:33138 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.040147217s
	[INFO] 10.244.0.4:46303 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0002673s
	[INFO] 10.244.0.4:60761 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001771s
	[INFO] 10.244.0.4:55567 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000249599s
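	Both CoreDNS replicas are answering cluster-internal names (kubernetes.default.svc.cluster.local) and host.minikube.internal, so DNS on the surviving nodes looks healthy. The lookups recorded above can be replayed by hand (a sketch; the pod name comes from the container status section, and nslookup ships in the gcr.io/k8s-minikube/busybox:1.28 image):

	  kubectl exec busybox-7dff88458-kr92j -- nslookup kubernetes.default.svc.cluster.local
	  kubectl exec busybox-7dff88458-kr92j -- nslookup host.minikube.internal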
	
	
	==> describe nodes <==
	Name:               ha-132600
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-132600
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b
	                    minikube.k8s.io/name=ha-132600
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_14T07_22_04_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Oct 2024 14:22:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-132600
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Oct 2024 14:45:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 14 Oct 2024 14:41:59 +0000   Mon, 14 Oct 2024 14:22:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 14 Oct 2024 14:41:59 +0000   Mon, 14 Oct 2024 14:22:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 14 Oct 2024 14:41:59 +0000   Mon, 14 Oct 2024 14:22:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 14 Oct 2024 14:41:59 +0000   Mon, 14 Oct 2024 14:22:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.20.108.120
	  Hostname:    ha-132600
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 07bcc58a0c7e4f0eaae96253dcf0a4ae
	  System UUID:                e06d00fc-5ebc-1a49-bf31-4c6ce500fe9c
	  Boot ID:                    215d765d-07e3-4bb7-ba3c-58ab142d5afa
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-kr92j              0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 coredns-7c65d6cfc9-4qfrq             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     23m
	  kube-system                 coredns-7c65d6cfc9-zf6cd             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     23m
	  kube-system                 etcd-ha-132600                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         23m
	  kube-system                 kindnet-rkjqr                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      23m
	  kube-system                 kube-apiserver-ha-132600             250m (12%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-controller-manager-ha-132600    200m (10%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-proxy-zkbj8                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-scheduler-ha-132600             100m (5%)     0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-vip-ha-132600                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 23m   kube-proxy       
	  Normal  Starting                 23m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  23m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  23m   kubelet          Node ha-132600 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23m   kubelet          Node ha-132600 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23m   kubelet          Node ha-132600 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           23m   node-controller  Node ha-132600 event: Registered Node ha-132600 in Controller
	  Normal  NodeReady                23m   kubelet          Node ha-132600 status is now: NodeReady
	
	
	Name:               ha-132600-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-132600-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b
	                    minikube.k8s.io/name=ha-132600
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_14T07_42_14_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Oct 2024 14:42:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-132600-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Oct 2024 14:45:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 14 Oct 2024 14:43:14 +0000   Mon, 14 Oct 2024 14:42:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 14 Oct 2024 14:43:14 +0000   Mon, 14 Oct 2024 14:42:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 14 Oct 2024 14:43:14 +0000   Mon, 14 Oct 2024 14:42:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 14 Oct 2024 14:43:14 +0000   Mon, 14 Oct 2024 14:42:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.20.111.174
	  Hostname:    ha-132600-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 3833a05e76bc4842ac73a1d2223670ec
	  System UUID:                f1d27ade-116b-0642-ac95-727d62870b2a
	  Boot ID:                    07765716-ab52-4f7d-8d78-8fbd996c8cc0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-8thz6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kindnet-dznf8              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m45s
	  kube-system                 kube-proxy-q6wxd           0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m32s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m46s (x2 over 3m46s)  kubelet          Node ha-132600-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  3m46s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  CIDRAssignmentFailed     3m45s                  cidrAllocator    Node ha-132600-m03 status is now: CIDRAssignmentFailed
	  Normal  NodeHasNoDiskPressure    3m45s (x2 over 3m46s)  kubelet          Node ha-132600-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m45s (x2 over 3m46s)  kubelet          Node ha-132600-m03 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m42s                  node-controller  Node ha-132600-m03 event: Registered Node ha-132600-m03 in Controller
	  Normal  NodeReady                3m13s                  kubelet          Node ha-132600-m03 status is now: NodeReady
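	Despite the CIDRAssignmentFailed event, ha-132600-m03 ends up with PodCIDR 10.244.1.0/24 and reaches Ready, matching what kindnet reports below. One way to confirm the allocator's final state (plain kubectl; the jsonpath formatting is just an illustrative choice):

	  kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'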
	
	
	==> dmesg <==
	[  +7.050916] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +45.956887] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.179434] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[Oct14 14:21] systemd-fstab-generator[997]: Ignoring "noauto" option for root device
	[  +0.095957] kauditd_printk_skb: 65 callbacks suppressed
	[  +0.527367] systemd-fstab-generator[1037]: Ignoring "noauto" option for root device
	[  +0.185829] systemd-fstab-generator[1049]: Ignoring "noauto" option for root device
	[  +0.220100] systemd-fstab-generator[1063]: Ignoring "noauto" option for root device
	[  +2.802946] systemd-fstab-generator[1277]: Ignoring "noauto" option for root device
	[  +0.209648] systemd-fstab-generator[1289]: Ignoring "noauto" option for root device
	[  +0.207153] systemd-fstab-generator[1301]: Ignoring "noauto" option for root device
	[  +0.288223] systemd-fstab-generator[1316]: Ignoring "noauto" option for root device
	[ +11.411364] systemd-fstab-generator[1416]: Ignoring "noauto" option for root device
	[  +0.107418] kauditd_printk_skb: 202 callbacks suppressed
	[  +3.752049] systemd-fstab-generator[1674]: Ignoring "noauto" option for root device
	[  +6.264384] systemd-fstab-generator[1823]: Ignoring "noauto" option for root device
	[  +0.102334] kauditd_printk_skb: 70 callbacks suppressed
	[  +5.637673] kauditd_printk_skb: 67 callbacks suppressed
	[Oct14 14:22] systemd-fstab-generator[2317]: Ignoring "noauto" option for root device
	[  +5.137329] kauditd_printk_skb: 17 callbacks suppressed
	[  +8.104433] kauditd_printk_skb: 29 callbacks suppressed
	[Oct14 14:26] kauditd_printk_skb: 28 callbacks suppressed
	[Oct14 14:42] hrtimer: interrupt took 6612884 ns
	
	
	==> etcd [84fd76d493d8] <==
	{"level":"warn","ts":"2024-10-14T14:42:06.625714Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"233.064235ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-14T14:42:06.625791Z","caller":"traceutil/trace.go:171","msg":"trace[1441231401] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2578; }","duration":"233.140335ms","start":"2024-10-14T14:42:06.392638Z","end":"2024-10-14T14:42:06.625778Z","steps":["trace[1441231401] 'agreement among raft nodes before linearized reading'  (duration: 233.011936ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-14T14:42:06.838240Z","caller":"traceutil/trace.go:171","msg":"trace[129313711] linearizableReadLoop","detail":"{readStateIndex:2841; appliedIndex:2840; }","duration":"210.836289ms","start":"2024-10-14T14:42:06.627384Z","end":"2024-10-14T14:42:06.838220Z","steps":["trace[129313711] 'read index received'  (duration: 140.43006ms)","trace[129313711] 'applied index is now lower than readState.Index'  (duration: 70.404629ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-14T14:42:06.838564Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"211.106689ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-14T14:42:06.838765Z","caller":"traceutil/trace.go:171","msg":"trace[744257141] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2578; }","duration":"211.373088ms","start":"2024-10-14T14:42:06.627381Z","end":"2024-10-14T14:42:06.838754Z","steps":["trace[744257141] 'agreement among raft nodes before linearized reading'  (duration: 210.908189ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-14T14:42:07.474608Z","caller":"traceutil/trace.go:171","msg":"trace[1611887997] linearizableReadLoop","detail":"{readStateIndex:2842; appliedIndex:2841; }","duration":"173.05738ms","start":"2024-10-14T14:42:07.301532Z","end":"2024-10-14T14:42:07.474589Z","steps":["trace[1611887997] 'read index received'  (duration: 172.879381ms)","trace[1611887997] 'applied index is now lower than readState.Index'  (duration: 177.299µs)"],"step_count":2}
	{"level":"info","ts":"2024-10-14T14:42:07.474817Z","caller":"traceutil/trace.go:171","msg":"trace[1935321534] transaction","detail":"{read_only:false; response_revision:2579; number_of_response:1; }","duration":"268.831248ms","start":"2024-10-14T14:42:07.205940Z","end":"2024-10-14T14:42:07.474771Z","steps":["trace[1935321534] 'process raft request'  (duration: 268.413149ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-14T14:42:07.475266Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"173.720679ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-14T14:42:07.475322Z","caller":"traceutil/trace.go:171","msg":"trace[213807606] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:2579; }","duration":"173.789679ms","start":"2024-10-14T14:42:07.301524Z","end":"2024-10-14T14:42:07.475314Z","steps":["trace[213807606] 'agreement among raft nodes before linearized reading'  (duration: 173.704979ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-14T14:42:19.018144Z","caller":"traceutil/trace.go:171","msg":"trace[410087756] transaction","detail":"{read_only:false; response_revision:2632; number_of_response:1; }","duration":"175.067373ms","start":"2024-10-14T14:42:18.843055Z","end":"2024-10-14T14:42:19.018123Z","steps":["trace[410087756] 'process raft request'  (duration: 174.514675ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-14T14:42:19.375954Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"237.591721ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-132600-m03\" ","response":"range_response_count:1 size:2962"}
	{"level":"info","ts":"2024-10-14T14:42:19.377451Z","caller":"traceutil/trace.go:171","msg":"trace[946791809] range","detail":"{range_begin:/registry/minions/ha-132600-m03; range_end:; response_count:1; response_revision:2632; }","duration":"239.116617ms","start":"2024-10-14T14:42:19.138318Z","end":"2024-10-14T14:42:19.377435Z","steps":["trace[946791809] 'range keys from in-memory index tree'  (duration: 237.374121ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-14T14:42:25.277317Z","caller":"traceutil/trace.go:171","msg":"trace[682053374] linearizableReadLoop","detail":"{readStateIndex:2917; appliedIndex:2916; }","duration":"140.762656ms","start":"2024-10-14T14:42:25.136515Z","end":"2024-10-14T14:42:25.277278Z","steps":["trace[682053374] 'read index received'  (duration: 140.586757ms)","trace[682053374] 'applied index is now lower than readState.Index'  (duration: 175.299µs)"],"step_count":2}
	{"level":"warn","ts":"2024-10-14T14:42:25.277542Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"140.959056ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-132600-m03\" ","response":"range_response_count:1 size:3114"}
	{"level":"info","ts":"2024-10-14T14:42:25.277634Z","caller":"traceutil/trace.go:171","msg":"trace[820348600] range","detail":"{range_begin:/registry/minions/ha-132600-m03; range_end:; response_count:1; response_revision:2648; }","duration":"141.112155ms","start":"2024-10-14T14:42:25.136511Z","end":"2024-10-14T14:42:25.277623Z","steps":["trace[820348600] 'agreement among raft nodes before linearized reading'  (duration: 140.928056ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-14T14:42:25.279675Z","caller":"traceutil/trace.go:171","msg":"trace[858523099] transaction","detail":"{read_only:false; response_revision:2648; number_of_response:1; }","duration":"166.194393ms","start":"2024-10-14T14:42:25.113464Z","end":"2024-10-14T14:42:25.279658Z","steps":["trace[858523099] 'process raft request'  (duration: 163.709399ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-14T14:42:25.921423Z","caller":"traceutil/trace.go:171","msg":"trace[45131034] transaction","detail":"{read_only:false; response_revision:2649; number_of_response:1; }","duration":"240.489012ms","start":"2024-10-14T14:42:25.680919Z","end":"2024-10-14T14:42:25.921408Z","steps":["trace[45131034] 'process raft request'  (duration: 240.146113ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-14T14:42:31.076106Z","caller":"traceutil/trace.go:171","msg":"trace[1663891298] transaction","detail":"{read_only:false; response_revision:2663; number_of_response:1; }","duration":"120.948703ms","start":"2024-10-14T14:42:30.955138Z","end":"2024-10-14T14:42:31.076086Z","steps":["trace[1663891298] 'process raft request'  (duration: 120.791304ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-14T14:42:31.287943Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"149.945532ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-132600-m03\" ","response":"range_response_count:1 size:3114"}
	{"level":"info","ts":"2024-10-14T14:42:31.288066Z","caller":"traceutil/trace.go:171","msg":"trace[1149000350] range","detail":"{range_begin:/registry/minions/ha-132600-m03; range_end:; response_count:1; response_revision:2663; }","duration":"150.051932ms","start":"2024-10-14T14:42:31.137978Z","end":"2024-10-14T14:42:31.288030Z","steps":["trace[1149000350] 'range keys from in-memory index tree'  (duration: 149.753933ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-14T14:42:32.313366Z","caller":"traceutil/trace.go:171","msg":"trace[1148264888] linearizableReadLoop","detail":"{readStateIndex:2939; appliedIndex:2938; }","duration":"175.689169ms","start":"2024-10-14T14:42:32.137533Z","end":"2024-10-14T14:42:32.313222Z","steps":["trace[1148264888] 'read index received'  (duration: 175.27797ms)","trace[1148264888] 'applied index is now lower than readState.Index'  (duration: 410.299µs)"],"step_count":2}
	{"level":"info","ts":"2024-10-14T14:42:32.313766Z","caller":"traceutil/trace.go:171","msg":"trace[53442055] transaction","detail":"{read_only:false; response_revision:2667; number_of_response:1; }","duration":"341.166363ms","start":"2024-10-14T14:42:31.972588Z","end":"2024-10-14T14:42:32.313754Z","steps":["trace[53442055] 'process raft request'  (duration: 340.380665ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-14T14:42:32.314054Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-14T14:42:31.972576Z","time spent":"341.263763ms","remote":"127.0.0.1:55456","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1094,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:2661 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1021 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-10-14T14:42:32.314975Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"177.510264ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-132600-m03\" ","response":"range_response_count:1 size:3114"}
	{"level":"info","ts":"2024-10-14T14:42:32.315174Z","caller":"traceutil/trace.go:171","msg":"trace[597120152] range","detail":"{range_begin:/registry/minions/ha-132600-m03; range_end:; response_count:1; response_revision:2667; }","duration":"177.711264ms","start":"2024-10-14T14:42:32.137452Z","end":"2024-10-14T14:42:32.315163Z","steps":["trace[597120152] 'agreement among raft nodes before linearized reading'  (duration: 177.444264ms)"],"step_count":1}
	
	
	==> kernel <==
	 14:45:59 up 26 min,  0 users,  load average: 0.29, 0.40, 0.35
	Linux ha-132600 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [dae2c0aa67af] <==
	I1014 14:44:57.563302       1 main.go:323] Node ha-132600-m03 has CIDR [10.244.1.0/24] 
	I1014 14:45:07.570620       1 main.go:296] Handling node with IPs: map[172.20.108.120:{}]
	I1014 14:45:07.570726       1 main.go:300] handling current node
	I1014 14:45:07.570786       1 main.go:296] Handling node with IPs: map[172.20.111.174:{}]
	I1014 14:45:07.570798       1 main.go:323] Node ha-132600-m03 has CIDR [10.244.1.0/24] 
	I1014 14:45:17.563776       1 main.go:296] Handling node with IPs: map[172.20.111.174:{}]
	I1014 14:45:17.563842       1 main.go:323] Node ha-132600-m03 has CIDR [10.244.1.0/24] 
	I1014 14:45:17.564237       1 main.go:296] Handling node with IPs: map[172.20.108.120:{}]
	I1014 14:45:17.564276       1 main.go:300] handling current node
	I1014 14:45:27.572185       1 main.go:296] Handling node with IPs: map[172.20.108.120:{}]
	I1014 14:45:27.572294       1 main.go:300] handling current node
	I1014 14:45:27.572314       1 main.go:296] Handling node with IPs: map[172.20.111.174:{}]
	I1014 14:45:27.572323       1 main.go:323] Node ha-132600-m03 has CIDR [10.244.1.0/24] 
	I1014 14:45:37.572129       1 main.go:296] Handling node with IPs: map[172.20.108.120:{}]
	I1014 14:45:37.572255       1 main.go:300] handling current node
	I1014 14:45:37.572276       1 main.go:296] Handling node with IPs: map[172.20.111.174:{}]
	I1014 14:45:37.572286       1 main.go:323] Node ha-132600-m03 has CIDR [10.244.1.0/24] 
	I1014 14:45:47.571356       1 main.go:296] Handling node with IPs: map[172.20.108.120:{}]
	I1014 14:45:47.571543       1 main.go:300] handling current node
	I1014 14:45:47.571629       1 main.go:296] Handling node with IPs: map[172.20.111.174:{}]
	I1014 14:45:47.571657       1 main.go:323] Node ha-132600-m03 has CIDR [10.244.1.0/24] 
	I1014 14:45:57.572110       1 main.go:296] Handling node with IPs: map[172.20.108.120:{}]
	I1014 14:45:57.572226       1 main.go:300] handling current node
	I1014 14:45:57.572248       1 main.go:296] Handling node with IPs: map[172.20.111.174:{}]
	I1014 14:45:57.572257       1 main.go:323] Node ha-132600-m03 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [b661cb671310] <==
	I1014 14:21:58.925388       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1014 14:21:58.925426       1 policy_source.go:224] refreshing policies
	I1014 14:21:58.928296       1 controller.go:615] quota admission added evaluator for: namespaces
	E1014 14:21:58.950810       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1014 14:21:59.174958       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1014 14:21:59.819127       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1014 14:21:59.829797       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1014 14:21:59.829835       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1014 14:22:00.942976       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1014 14:22:01.015766       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1014 14:22:01.150423       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1014 14:22:01.164901       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.20.108.120]
	I1014 14:22:01.165818       1 controller.go:615] quota admission added evaluator for: endpoints
	I1014 14:22:01.175247       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1014 14:22:01.835583       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1014 14:22:03.554010       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1014 14:22:03.589407       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1014 14:22:03.617368       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1014 14:22:07.346752       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1014 14:22:07.516653       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E1014 14:38:03.250866       1 conn.go:339] Error on socket receive: read tcp 172.20.111.254:8443->172.20.96.1:57382: use of closed network connection
	E1014 14:38:04.469699       1 conn.go:339] Error on socket receive: read tcp 172.20.111.254:8443->172.20.96.1:57390: use of closed network connection
	E1014 14:38:05.596658       1 conn.go:339] Error on socket receive: read tcp 172.20.111.254:8443->172.20.96.1:57398: use of closed network connection
	E1014 14:38:40.228042       1 conn.go:339] Error on socket receive: read tcp 172.20.111.254:8443->172.20.96.1:57419: use of closed network connection
	E1014 14:38:50.673483       1 conn.go:339] Error on socket receive: read tcp 172.20.111.254:8443->172.20.96.1:57421: use of closed network connection
	
	
	==> kube-controller-manager [4a8cce31aa3a] <==
	I1014 14:41:59.832379       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600"
	I1014 14:42:14.020113       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-132600-m03\" does not exist"
	I1014 14:42:14.099308       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-132600-m03" podCIDRs=["10.244.1.0/24"]
	I1014 14:42:14.099376       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600-m03"
	E1014 14:42:14.131350       1 range_allocator.go:427] "Failed to update node PodCIDR after multiple attempts" err="failed to patch node CIDR: Node \"ha-132600-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.2.0/24\", \"10.244.1.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="ha-132600-m03" podCIDRs=["10.244.2.0/24"]
	E1014 14:42:14.131490       1 range_allocator.go:433] "CIDR assignment for node failed. Releasing allocated CIDR" err="failed to patch node CIDR: Node \"ha-132600-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.2.0/24\", \"10.244.1.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="ha-132600-m03"
	E1014 14:42:14.131543       1 range_allocator.go:246] "Unhandled Error" err="error syncing 'ha-132600-m03': failed to patch node CIDR: Node \"ha-132600-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.2.0/24\", \"10.244.1.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid], requeuing" logger="UnhandledError"
	I1014 14:42:14.132334       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600-m03"
	I1014 14:42:14.137556       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600-m03"
	I1014 14:42:14.252734       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600-m03"
	I1014 14:42:14.858650       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600-m03"
	I1014 14:42:17.081634       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-132600-m03"
	I1014 14:42:17.154621       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600-m03"
	I1014 14:42:24.267607       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600-m03"
	I1014 14:42:44.660569       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600-m03"
	I1014 14:42:46.187605       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-132600-m03"
	I1014 14:42:46.189657       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600-m03"
	I1014 14:42:46.206592       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600-m03"
	I1014 14:42:46.221387       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="51µs"
	I1014 14:42:46.232286       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="51.3µs"
	I1014 14:42:46.252360       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="196µs"
	I1014 14:42:47.108442       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600-m03"
	I1014 14:42:49.071073       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="17.613156ms"
	I1014 14:42:49.071582       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="49.8µs"
	I1014 14:43:14.925968       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600-m03"
	
	
	==> kube-proxy [4745a4b0dc37] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1014 14:22:09.024513       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1014 14:22:09.047174       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["172.20.108.120"]
	E1014 14:22:09.047308       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1014 14:22:09.124346       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1014 14:22:09.124494       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1014 14:22:09.124529       1 server_linux.go:169] "Using iptables Proxier"
	I1014 14:22:09.128809       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1014 14:22:09.129721       1 server.go:483] "Version info" version="v1.31.1"
	I1014 14:22:09.129805       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 14:22:09.131652       1 config.go:199] "Starting service config controller"
	I1014 14:22:09.131988       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1014 14:22:09.132269       1 config.go:105] "Starting endpoint slice config controller"
	I1014 14:22:09.132619       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1014 14:22:09.133592       1 config.go:328] "Starting node config controller"
	I1014 14:22:09.135184       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1014 14:22:09.232621       1 shared_informer.go:320] Caches are synced for service config
	I1014 14:22:09.233933       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1014 14:22:09.236054       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [35c870864a80] <==
	W1014 14:21:59.944424       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1014 14:21:59.944452       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 14:21:59.979078       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1014 14:21:59.979365       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 14:22:00.111250       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1014 14:22:00.111664       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 14:22:00.192015       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1014 14:22:00.192346       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1014 14:22:00.196984       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1014 14:22:00.197112       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 14:22:00.197208       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1014 14:22:00.197241       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1014 14:22:00.212172       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1014 14:22:00.212214       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1014 14:22:00.260144       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1014 14:22:00.261002       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1014 14:22:00.276235       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1014 14:22:00.276419       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1014 14:22:00.295031       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1014 14:22:00.295892       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 14:22:00.311664       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1014 14:22:00.313212       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1014 14:22:00.354351       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1014 14:22:00.355204       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 14:22:02.117335       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 14 14:41:03 ha-132600 kubelet[2324]: E1014 14:41:03.675764    2324 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 14 14:41:03 ha-132600 kubelet[2324]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 14 14:41:03 ha-132600 kubelet[2324]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 14 14:41:03 ha-132600 kubelet[2324]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 14 14:41:03 ha-132600 kubelet[2324]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 14 14:42:03 ha-132600 kubelet[2324]: E1014 14:42:03.676366    2324 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 14 14:42:03 ha-132600 kubelet[2324]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 14 14:42:03 ha-132600 kubelet[2324]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 14 14:42:03 ha-132600 kubelet[2324]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 14 14:42:03 ha-132600 kubelet[2324]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 14 14:43:03 ha-132600 kubelet[2324]: E1014 14:43:03.675510    2324 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 14 14:43:03 ha-132600 kubelet[2324]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 14 14:43:03 ha-132600 kubelet[2324]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 14 14:43:03 ha-132600 kubelet[2324]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 14 14:43:03 ha-132600 kubelet[2324]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 14 14:44:03 ha-132600 kubelet[2324]: E1014 14:44:03.676935    2324 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 14 14:44:03 ha-132600 kubelet[2324]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 14 14:44:03 ha-132600 kubelet[2324]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 14 14:44:03 ha-132600 kubelet[2324]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 14 14:44:03 ha-132600 kubelet[2324]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 14 14:45:03 ha-132600 kubelet[2324]: E1014 14:45:03.682657    2324 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 14 14:45:03 ha-132600 kubelet[2324]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 14 14:45:03 ha-132600 kubelet[2324]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 14 14:45:03 ha-132600 kubelet[2324]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 14 14:45:03 ha-132600 kubelet[2324]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-132600 -n ha-132600
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-132600 -n ha-132600: (11.7534879s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-132600 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-7dff88458-rng7p
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/CopyFile]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-132600 describe pod busybox-7dff88458-rng7p
helpers_test.go:282: (dbg) kubectl --context ha-132600 describe pod busybox-7dff88458-rng7p:

-- stdout --
	Name:             busybox-7dff88458-rng7p
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7dff88458
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7dff88458
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9hpj2 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-9hpj2:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                    From               Message
	  ----     ------            ----                   ----               -------
	  Warning  FailedScheduling  4m39s (x4 over 19m)    default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  3m16s (x2 over 3m26s)  default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.

-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/CopyFile FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/CopyFile (68.76s)

TestMultiControlPlane/serial/StopSecondaryNode (102.48s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-132600 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Done: out/minikube-windows-amd64.exe -p ha-132600 node stop m02 -v=7 --alsologtostderr: (44.2447237s)
ha_test.go:371: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-132600 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-132600 status -v=7 --alsologtostderr: exit status 7 (25.4541633s)

-- stdout --
	ha-132600
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-132600-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-132600-m03
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1014 07:46:56.720164   14240 out.go:345] Setting OutFile to fd 1380 ...
	I1014 07:46:56.720164   14240 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:46:56.720164   14240 out.go:358] Setting ErrFile to fd 1516...
	I1014 07:46:56.720164   14240 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:46:56.743821   14240 out.go:352] Setting JSON to false
	I1014 07:46:56.743904   14240 mustload.go:65] Loading cluster: ha-132600
	I1014 07:46:56.743904   14240 notify.go:220] Checking for updates...
	I1014 07:46:56.743904   14240 config.go:182] Loaded profile config "ha-132600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:46:56.743904   14240 status.go:174] checking status of ha-132600 ...
	I1014 07:46:56.743904   14240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:46:58.877869   14240 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:46:58.877869   14240 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:46:58.877869   14240 status.go:371] ha-132600 host status = "Running" (err=<nil>)
	I1014 07:46:58.877869   14240 host.go:66] Checking if "ha-132600" exists ...
	I1014 07:46:58.878565   14240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:47:00.975108   14240 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:47:00.975362   14240 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:47:00.975482   14240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:47:03.484027   14240 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:47:03.484216   14240 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:47:03.484308   14240 host.go:66] Checking if "ha-132600" exists ...
	I1014 07:47:03.496324   14240 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 07:47:03.496324   14240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:47:05.568601   14240 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:47:05.569147   14240 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:47:05.569252   14240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:47:08.119842   14240 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:47:08.119917   14240 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:47:08.120043   14240 sshutil.go:53] new ssh client: &{IP:172.20.108.120 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\id_rsa Username:docker}
	I1014 07:47:08.226509   14240 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.7301785s)
	I1014 07:47:08.240144   14240 ssh_runner.go:195] Run: systemctl --version
	I1014 07:47:08.262202   14240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 07:47:08.286559   14240 kubeconfig.go:125] found "ha-132600" server: "https://172.20.111.254:8443"
	I1014 07:47:08.286615   14240 api_server.go:166] Checking apiserver status ...
	I1014 07:47:08.297308   14240 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 07:47:08.347479   14240 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2197/cgroup
	W1014 07:47:08.368276   14240 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2197/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1014 07:47:08.379506   14240 ssh_runner.go:195] Run: ls
	I1014 07:47:08.385836   14240 api_server.go:253] Checking apiserver healthz at https://172.20.111.254:8443/healthz ...
	I1014 07:47:08.395794   14240 api_server.go:279] https://172.20.111.254:8443/healthz returned 200:
	ok
	I1014 07:47:08.395961   14240 status.go:463] ha-132600 apiserver status = Running (err=<nil>)
	I1014 07:47:08.396041   14240 status.go:176] ha-132600 status: &{Name:ha-132600 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1014 07:47:08.396110   14240 status.go:174] checking status of ha-132600-m02 ...
	I1014 07:47:08.397645   14240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:47:10.482458   14240 main.go:141] libmachine: [stdout =====>] : Off
	
	I1014 07:47:10.483172   14240 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:47:10.483232   14240 status.go:371] ha-132600-m02 host status = "Stopped" (err=<nil>)
	I1014 07:47:10.483232   14240 status.go:384] host is not running, skipping remaining checks
	I1014 07:47:10.483232   14240 status.go:176] ha-132600-m02 status: &{Name:ha-132600-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1014 07:47:10.483232   14240 status.go:174] checking status of ha-132600-m03 ...
	I1014 07:47:10.483956   14240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m03 ).state
	I1014 07:47:12.604346   14240 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:47:12.604346   14240 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:47:12.605022   14240 status.go:371] ha-132600-m03 host status = "Running" (err=<nil>)
	I1014 07:47:12.605022   14240 host.go:66] Checking if "ha-132600-m03" exists ...
	I1014 07:47:12.606022   14240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m03 ).state
	I1014 07:47:14.743073   14240 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:47:14.743073   14240 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:47:14.744116   14240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m03 ).networkadapters[0]).ipaddresses[0]
	I1014 07:47:17.260525   14240 main.go:141] libmachine: [stdout =====>] : 172.20.111.174
	
	I1014 07:47:17.260758   14240 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:47:17.260851   14240 host.go:66] Checking if "ha-132600-m03" exists ...
	I1014 07:47:17.273090   14240 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 07:47:17.273090   14240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m03 ).state
	I1014 07:47:19.378875   14240 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:47:19.378875   14240 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:47:19.378968   14240 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m03 ).networkadapters[0]).ipaddresses[0]
	I1014 07:47:21.874836   14240 main.go:141] libmachine: [stdout =====>] : 172.20.111.174
	
	I1014 07:47:21.875037   14240 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:47:21.875213   14240 sshutil.go:53] new ssh client: &{IP:172.20.111.174 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m03\id_rsa Username:docker}
	I1014 07:47:21.979527   14240 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.7064305s)
	I1014 07:47:21.993301   14240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 07:47:22.023280   14240 status.go:176] ha-132600-m03 status: &{Name:ha-132600-m03 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:377: status says not all three control-plane nodes are present: args "out/minikube-windows-amd64.exe -p ha-132600 status -v=7 --alsologtostderr": ha-132600
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-132600-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-132600-m03
type: Worker
host: Running
kubelet: Running

ha_test.go:380: status says not three hosts are running: args "out/minikube-windows-amd64.exe -p ha-132600 status -v=7 --alsologtostderr": ha-132600
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-132600-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-132600-m03
type: Worker
host: Running
kubelet: Running

ha_test.go:383: status says not three kubelets are running: args "out/minikube-windows-amd64.exe -p ha-132600 status -v=7 --alsologtostderr": ha-132600
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-132600-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-132600-m03
type: Worker
host: Running
kubelet: Running

ha_test.go:386: status says not two apiservers are running: args "out/minikube-windows-amd64.exe -p ha-132600 status -v=7 --alsologtostderr": ha-132600
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-132600-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-132600-m03
type: Worker
host: Running
kubelet: Running

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-132600 -n ha-132600
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-132600 -n ha-132600: (11.9062533s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-132600 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-132600 logs -n 25: (8.077329s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| kubectl | -p ha-132600 -- get pods -o          | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:36 PDT | 14 Oct 24 07:36 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- get pods -o          | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:36 PDT | 14 Oct 24 07:36 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- get pods -o          | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:36 PDT | 14 Oct 24 07:36 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- get pods -o          | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:36 PDT | 14 Oct 24 07:36 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- get pods -o          | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:36 PDT | 14 Oct 24 07:36 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- get pods -o          | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:37 PDT | 14 Oct 24 07:37 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- get pods -o          | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:37 PDT | 14 Oct 24 07:37 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- get pods -o          | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT | 14 Oct 24 07:38 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- get pods -o          | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT | 14 Oct 24 07:38 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT |                     |
	|         | busybox-7dff88458-8thz6 --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT | 14 Oct 24 07:38 PDT |
	|         | busybox-7dff88458-kr92j --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT |                     |
	|         | busybox-7dff88458-rng7p --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT |                     |
	|         | busybox-7dff88458-8thz6 --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT | 14 Oct 24 07:38 PDT |
	|         | busybox-7dff88458-kr92j --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT |                     |
	|         | busybox-7dff88458-rng7p --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT |                     |
	|         | busybox-7dff88458-8thz6 -- nslookup  |           |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT | 14 Oct 24 07:38 PDT |
	|         | busybox-7dff88458-kr92j -- nslookup  |           |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT |                     |
	|         | busybox-7dff88458-rng7p -- nslookup  |           |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- get pods -o          | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT | 14 Oct 24 07:38 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT |                     |
	|         | busybox-7dff88458-8thz6              |           |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT | 14 Oct 24 07:38 PDT |
	|         | busybox-7dff88458-kr92j              |           |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT |                     |
	|         | busybox-7dff88458-kr92j -- sh        |           |                   |         |                     |                     |
	|         | -c ping -c 1 172.20.96.1             |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT |                     |
	|         | busybox-7dff88458-rng7p              |           |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |         |                     |                     |
	| node    | add -p ha-132600 -v=7                | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:39 PDT | 14 Oct 24 07:42 PDT |
	|         | --alsologtostderr                    |           |                   |         |                     |                     |
	| node    | ha-132600 node stop m02 -v=7         | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:46 PDT | 14 Oct 24 07:46 PDT |
	|         | --alsologtostderr                    |           |                   |         |                     |                     |
	|---------|--------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/14 07:19:00
	Running on machine: minikube1
	Binary: Built with gc go1.23.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 07:19:00.342865   13076 out.go:345] Setting OutFile to fd 1228 ...
	I1014 07:19:00.344546   13076 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:19:00.344546   13076 out.go:358] Setting ErrFile to fd 824...
	I1014 07:19:00.344546   13076 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:19:00.368247   13076 out.go:352] Setting JSON to false
	I1014 07:19:00.372277   13076 start.go:129] hostinfo: {"hostname":"minikube1","uptime":101054,"bootTime":1728814485,"procs":202,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5011 Build 19045.5011","kernelVersion":"10.0.19045.5011 Build 19045.5011","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W1014 07:19:00.372803   13076 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1014 07:19:00.379728   13076 out.go:177] * [ha-132600] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5011 Build 19045.5011
	I1014 07:19:00.386820   13076 notify.go:220] Checking for updates...
	I1014 07:19:00.389571   13076 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1014 07:19:00.392679   13076 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 07:19:00.395167   13076 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I1014 07:19:00.398285   13076 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 07:19:00.400520   13076 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 07:19:00.404118   13076 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 07:19:05.654471   13076 out.go:177] * Using the hyperv driver based on user configuration
	I1014 07:19:05.658285   13076 start.go:297] selected driver: hyperv
	I1014 07:19:05.658285   13076 start.go:901] validating driver "hyperv" against <nil>
	I1014 07:19:05.658285   13076 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 07:19:05.705211   13076 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1014 07:19:05.706541   13076 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 07:19:05.706541   13076 cni.go:84] Creating CNI manager for ""
	I1014 07:19:05.706541   13076 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1014 07:19:05.706541   13076 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1014 07:19:05.707066   13076 start.go:340] cluster config:
	{Name:ha-132600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-132600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 07:19:05.707207   13076 iso.go:125] acquiring lock: {Name:mkb71635bcbccba29dce9048ad4cc71430a7e577 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 07:19:05.711264   13076 out.go:177] * Starting "ha-132600" primary control-plane node in "ha-132600" cluster
	I1014 07:19:05.715097   13076 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1014 07:19:05.715262   13076 preload.go:146] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I1014 07:19:05.715344   13076 cache.go:56] Caching tarball of preloaded images
	I1014 07:19:05.715452   13076 preload.go:172] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1014 07:19:05.715452   13076 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1014 07:19:05.716553   13076 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\config.json ...
	I1014 07:19:05.716898   13076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\config.json: {Name:mkbd11bf3f90adebf1f4630d2e8deaac328748d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:19:05.717886   13076 start.go:360] acquireMachinesLock for ha-132600: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 07:19:05.717886   13076 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-132600"
	I1014 07:19:05.718562   13076 start.go:93] Provisioning new machine with config: &{Name:ha-132600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-132600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1014 07:19:05.718562   13076 start.go:125] createHost starting for "" (driver="hyperv")
	I1014 07:19:05.721891   13076 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1014 07:19:05.722555   13076 start.go:159] libmachine.API.Create for "ha-132600" (driver="hyperv")
	I1014 07:19:05.722555   13076 client.go:168] LocalClient.Create starting
	I1014 07:19:05.722555   13076 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I1014 07:19:05.723316   13076 main.go:141] libmachine: Decoding PEM data...
	I1014 07:19:05.723316   13076 main.go:141] libmachine: Parsing certificate...
	I1014 07:19:05.723316   13076 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I1014 07:19:05.723877   13076 main.go:141] libmachine: Decoding PEM data...
	I1014 07:19:05.723955   13076 main.go:141] libmachine: Parsing certificate...
	I1014 07:19:05.723955   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I1014 07:19:07.752927   13076 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I1014 07:19:07.753576   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:07.753793   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I1014 07:19:09.445838   13076 main.go:141] libmachine: [stdout =====>] : False
	
	I1014 07:19:09.445838   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:09.446315   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1014 07:19:10.901034   13076 main.go:141] libmachine: [stdout =====>] : True
	
	I1014 07:19:10.901034   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:10.901034   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1014 07:19:14.320078   13076 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1014 07:19:14.320309   13076 main.go:141] libmachine: [stderr =====>] : 
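The switch probe above shells out to PowerShell and parses the JSON it prints back: enumerate switches, keep External ones plus the well-known "Default Switch" GUID, and sort so an External switch wins. A minimal Go sketch of the same pattern, assuming a Windows host with the Hyper-V module (the query string and GUID are taken verbatim from this log; everything else is illustrative):

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	type vmSwitch struct {
		Id         string
		Name       string
		SwitchType int
	}

	func main() {
		ps := `[Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)`
		out, err := exec.Command(`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
			"-NoProfile", "-NonInteractive", ps).Output()
		if err != nil {
			log.Fatal(err)
		}
		var switches []vmSwitch
		if err := json.Unmarshal(out, &switches); err != nil {
			log.Fatal(err)
		}
		if len(switches) == 0 {
			log.Fatal("no usable Hyper-V switch found")
		}
		// In this run the only hit is the internal "Default Switch" (SwitchType 1).
		fmt.Printf("Using switch %q\n", switches[0].Name)
	}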
	I1014 07:19:14.323253   13076 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1014 07:19:14.829443   13076 main.go:141] libmachine: Creating SSH key...
	I1014 07:19:14.988345   13076 main.go:141] libmachine: Creating VM...
	I1014 07:19:14.988345   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1014 07:19:17.790932   13076 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1014 07:19:17.791005   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:17.791145   13076 main.go:141] libmachine: Using switch "Default Switch"
	I1014 07:19:17.791145   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1014 07:19:19.529861   13076 main.go:141] libmachine: [stdout =====>] : True
	
	I1014 07:19:19.529861   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:19.529861   13076 main.go:141] libmachine: Creating VHD
	I1014 07:19:19.530436   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\fixed.vhd' -SizeBytes 10MB -Fixed
	I1014 07:19:23.105904   13076 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 267BC11D-A195-4636-8AE9-EF1D7966C95C
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I1014 07:19:23.105987   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:23.106037   13076 main.go:141] libmachine: Writing magic tar header
	I1014 07:19:23.106037   13076 main.go:141] libmachine: Writing SSH key tar header
	I1014 07:19:23.118040   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\disk.vhd' -VHDType Dynamic -DeleteSource
	I1014 07:19:26.201609   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:26.201950   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:26.202023   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\disk.vhd' -SizeBytes 20000MB
	I1014 07:19:28.768368   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:28.768789   13076 main.go:141] libmachine: [stderr =====>] : 
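The VHD sequence above is a deliberate trick: create a tiny *fixed* VHD, write a tar stream (the "magic tar header" plus the SSH key) straight into its data region for the guest to pick up on first boot, convert it to a dynamic VHD, and only then resize it to the requested 20000MB. A condensed Go sketch of that command chain, using the paths from this run (the tar-writing step happens between the first two commands and is elided here):

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		base := `C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600`
		cmds := []string{
			`Hyper-V\New-VHD -Path '` + base + `\fixed.vhd' -SizeBytes 10MB -Fixed`,
			// ...driver writes the magic tar header + SSH key into fixed.vhd here...
			`Hyper-V\Convert-VHD -Path '` + base + `\fixed.vhd' -DestinationPath '` + base + `\disk.vhd' -VHDType Dynamic -DeleteSource`,
			`Hyper-V\Resize-VHD -Path '` + base + `\disk.vhd' -SizeBytes 20000MB`,
		}
		for _, c := range cmds {
			if out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", c).CombinedOutput(); err != nil {
				log.Fatalf("%s failed: %v\n%s", c, err, out)
			}
		}
	}

A fixed VHD is used first because its data region sits at a predictable offset, which makes the raw tar write safe; the dynamic conversion afterwards reclaims the space.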
	I1014 07:19:28.768969   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-132600 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I1014 07:19:32.226690   13076 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-132600 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I1014 07:19:32.226915   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:32.226915   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-132600 -DynamicMemoryEnabled $false
	I1014 07:19:34.390731   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:34.390837   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:34.391049   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-132600 -Count 2
	I1014 07:19:36.449382   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:36.449628   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:36.449628   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-132600 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\boot2docker.iso'
	I1014 07:19:38.932909   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:38.932909   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:38.932909   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-132600 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\disk.vhd'
	I1014 07:19:41.509229   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:41.509229   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:41.509229   13076 main.go:141] libmachine: Starting VM...
	I1014 07:19:41.509229   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-132600
	I1014 07:19:44.718680   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:44.718680   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:44.719475   13076 main.go:141] libmachine: Waiting for host to start...
	I1014 07:19:44.719770   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:19:46.954823   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:19:46.954823   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:46.955829   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:19:49.408227   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:49.408227   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:50.409369   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:19:52.561607   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:19:52.561912   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:52.561912   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:19:55.081136   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:55.081187   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:56.082401   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:19:58.271497   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:19:58.271497   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:58.272384   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:00.740863   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:20:00.741070   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:01.741536   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:03.902654   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:03.902721   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:03.902721   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:06.378064   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:20:06.378064   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:07.379104   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:09.537329   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:09.537329   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:09.537541   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:12.074320   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:12.074320   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:12.074834   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:14.171613   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:14.171613   13076 main.go:141] libmachine: [stderr =====>] : 
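The repeated state/ipaddresses queries between 07:19:44 and 07:20:12 are a plain poll: ask for the first IP of the VM's first adapter and, while the answer comes back empty, sleep about a second and retry. A sketch of that loop, assuming the same VM name:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func main() {
		query := `(( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]`
		for {
			out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", query).Output()
			if err == nil {
				if ip := strings.TrimSpace(string(out)); ip != "" {
					fmt.Println("host is up at", ip) // 172.20.108.120 in this run
					return
				}
			}
			time.Sleep(time.Second) // matches the ~1s gap between attempts above
		}
	}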
	I1014 07:20:14.171613   13076 machine.go:93] provisionDockerMachine start ...
	I1014 07:20:14.172377   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:16.225265   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:16.225265   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:16.225634   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:18.692061   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:18.692061   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:18.698261   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:20:18.713495   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.108.120 22 <nil> <nil>}
	I1014 07:20:18.713568   13076 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 07:20:18.839311   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1014 07:20:18.839517   13076 buildroot.go:166] provisioning hostname "ha-132600"
	I1014 07:20:18.839517   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:20.911254   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:20.911424   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:20.911601   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:23.370138   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:23.370756   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:23.377008   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:20:23.377773   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.108.120 22 <nil> <nil>}
	I1014 07:20:23.377773   13076 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-132600 && echo "ha-132600" | sudo tee /etc/hostname
	I1014 07:20:23.537548   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-132600
	
	I1014 07:20:23.537548   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:25.571429   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:25.571429   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:25.572326   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:28.068677   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:28.068677   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:28.077026   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:20:28.078615   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.108.120 22 <nil> <nil>}
	I1014 07:20:28.078615   13076 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-132600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-132600/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-132600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 07:20:28.216809   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: 
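"Using SSH client type: native" above means an in-process Go SSH client rather than an external ssh binary. A minimal sketch of running one of these provisioning commands over golang.org/x/crypto/ssh, with the key path, user, and address taken from this run (host-key checking is skipped, which is only acceptable for a throwaway test VM like this one):

	package main

	import (
		"fmt"
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile(`C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\id_rsa`)
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			log.Fatal(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM only
		}
		client, err := ssh.Dial("tcp", "172.20.108.120:22", cfg)
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			log.Fatal(err)
		}
		defer sess.Close()
		out, err := sess.CombinedOutput("hostname")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s", out) // "minikube" before the rename, "ha-132600" after
	}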
	I1014 07:20:28.216874   13076 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I1014 07:20:28.216990   13076 buildroot.go:174] setting up certificates
	I1014 07:20:28.217016   13076 provision.go:84] configureAuth start
	I1014 07:20:28.217047   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:30.246758   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:30.247093   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:30.247368   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:32.688428   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:32.688428   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:32.688428   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:34.748908   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:34.748908   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:34.749349   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:37.221628   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:37.221798   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:37.221798   13076 provision.go:143] copyHostCerts
	I1014 07:20:37.221998   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I1014 07:20:37.222444   13076 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I1014 07:20:37.222558   13076 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I1014 07:20:37.223110   13076 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1014 07:20:37.224109   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I1014 07:20:37.224599   13076 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I1014 07:20:37.224599   13076 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I1014 07:20:37.224959   13076 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I1014 07:20:37.225332   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I1014 07:20:37.225332   13076 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I1014 07:20:37.225332   13076 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I1014 07:20:37.226765   13076 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1014 07:20:37.227733   13076 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-132600 san=[127.0.0.1 172.20.108.120 ha-132600 localhost minikube]
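The server cert generated here carries a SAN for every name the Docker endpoint might be reached by: 127.0.0.1, the VM IP, the hostname, localhost, and minikube. A hedged sketch of SAN-bearing certificate creation with crypto/x509 (self-signed for brevity; the real flow above signs with ca.pem/ca-key.pem, and the org and lifetime are copied from this run):

	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			log.Fatal(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-132600"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// the SAN set from the log line above
			DNSNames:    []string{"ha-132600", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.20.108.120")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			log.Fatal(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}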
	I1014 07:20:37.731674   13076 provision.go:177] copyRemoteCerts
	I1014 07:20:37.744706   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 07:20:37.744706   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:39.801309   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:39.801309   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:39.801309   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:42.273306   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:42.273367   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:42.273367   13076 sshutil.go:53] new ssh client: &{IP:172.20.108.120 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\id_rsa Username:docker}
	I1014 07:20:42.378794   13076 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.6340227s)
	I1014 07:20:42.378847   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1014 07:20:42.378847   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1200 bytes)
	I1014 07:20:42.424772   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1014 07:20:42.425289   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 07:20:42.471397   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1014 07:20:42.471964   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1014 07:20:42.527104   13076 provision.go:87] duration metric: took 14.3100395s to configureAuth
	I1014 07:20:42.527104   13076 buildroot.go:189] setting minikube options for container-runtime
	I1014 07:20:42.527712   13076 config.go:182] Loaded profile config "ha-132600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:20:42.528252   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:44.598308   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:44.598308   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:44.599305   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:47.040424   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:47.040637   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:47.045726   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:20:47.046398   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.108.120 22 <nil> <nil>}
	I1014 07:20:47.046398   13076 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1014 07:20:47.167710   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1014 07:20:47.167801   13076 buildroot.go:70] root file system type: tmpfs
	I1014 07:20:47.168208   13076 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1014 07:20:47.168315   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:49.202111   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:49.202331   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:49.202424   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:51.648278   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:51.648278   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:51.654248   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:20:51.655052   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.108.120 22 <nil> <nil>}
	I1014 07:20:51.655052   13076 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1014 07:20:51.814048   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1014 07:20:51.814229   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:53.871643   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:53.871643   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:53.872030   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:56.286054   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:56.286054   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:56.292357   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:20:56.293125   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.108.120 22 <nil> <nil>}
	I1014 07:20:56.293125   13076 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1014 07:20:58.458028   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
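The SSH one-liner at 07:20:56 is an idempotent install: diff the staged unit against the live one and only replace, daemon-reload, enable, and restart when they differ. On this first boot /lib/systemd/system/docker.service does not exist yet, so diff fails its stat and the whole replace branch runs, which is where the "Created symlink" line comes from. The same guard, sketched locally in Go (paths as above; a sketch of the pattern, not minikube's code):

	package main

	import (
		"bytes"
		"log"
		"os"
		"os/exec"
	)

	func main() {
		cur, _ := os.ReadFile("/lib/systemd/system/docker.service") // missing => nil, like diff's stat error above
		next, err := os.ReadFile("/lib/systemd/system/docker.service.new")
		if err != nil {
			log.Fatal(err)
		}
		if bytes.Equal(cur, next) {
			return // unit already current; nothing to restart
		}
		if err := os.Rename("/lib/systemd/system/docker.service.new", "/lib/systemd/system/docker.service"); err != nil {
			log.Fatal(err)
		}
		for _, args := range [][]string{
			{"systemctl", "-f", "daemon-reload"},
			{"systemctl", "-f", "enable", "docker"},
			{"systemctl", "-f", "restart", "docker"},
		} {
			if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
				log.Fatalf("%v: %v\n%s", args, err, out)
			}
		}
	}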
	
	I1014 07:20:58.458028   13076 machine.go:96] duration metric: took 44.286361s to provisionDockerMachine
	I1014 07:20:58.458028   13076 client.go:171] duration metric: took 1m52.7353371s to LocalClient.Create
	I1014 07:20:58.458028   13076 start.go:167] duration metric: took 1m52.7353371s to libmachine.API.Create "ha-132600"
	I1014 07:20:58.458028   13076 start.go:293] postStartSetup for "ha-132600" (driver="hyperv")
	I1014 07:20:58.458028   13076 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 07:20:58.471262   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 07:20:58.471262   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:21:00.512590   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:21:00.513162   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:00.513321   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:21:02.924795   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:21:02.924929   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:02.925163   13076 sshutil.go:53] new ssh client: &{IP:172.20.108.120 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\id_rsa Username:docker}
	I1014 07:21:03.032582   13076 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.5613141s)
	I1014 07:21:03.044373   13076 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 07:21:03.050831   13076 info.go:137] Remote host: Buildroot 2023.02.9
	I1014 07:21:03.050831   13076 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I1014 07:21:03.051271   13076 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I1014 07:21:03.052198   13076 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem -> 9362.pem in /etc/ssl/certs
	I1014 07:21:03.052295   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem -> /etc/ssl/certs/9362.pem
	I1014 07:21:03.063873   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 07:21:03.081968   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem --> /etc/ssl/certs/9362.pem (1708 bytes)
	I1014 07:21:03.128637   13076 start.go:296] duration metric: took 4.6706031s for postStartSetup
	I1014 07:21:03.132031   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:21:05.196102   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:21:05.196622   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:05.196696   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:21:07.624940   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:21:07.625781   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:07.626103   13076 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\config.json ...
	I1014 07:21:07.629046   13076 start.go:128] duration metric: took 2m1.9103356s to createHost
	I1014 07:21:07.629046   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:21:09.687162   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:21:09.687420   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:09.687490   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:21:12.104556   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:21:12.104556   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:12.110165   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:21:12.110572   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.108.120 22 <nil> <nil>}
	I1014 07:21:12.110572   13076 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1014 07:21:12.241524   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728915672.238854784
	
	I1014 07:21:12.241655   13076 fix.go:216] guest clock: 1728915672.238854784
	I1014 07:21:12.241726   13076 fix.go:229] Guest: 2024-10-14 07:21:12.238854784 -0700 PDT Remote: 2024-10-14 07:21:07.6290467 -0700 PDT m=+127.381500801 (delta=4.609808084s)
	I1014 07:21:12.241841   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:21:14.299338   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:21:14.299599   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:14.299599   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:21:16.767477   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:21:16.767665   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:16.775565   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:21:16.775565   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.108.120 22 <nil> <nil>}
	I1014 07:21:16.775565   13076 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1728915672
	I1014 07:21:16.921038   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Oct 14 14:21:12 UTC 2024
	
	I1014 07:21:16.921096   13076 fix.go:236] clock set: Mon Oct 14 14:21:12 UTC 2024
	 (err=<nil>)
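The fix.go lines above compare the guest's clock (read via `date +%s.%N`) against the host-side reference and then pin the guest with `sudo date -s @<epoch>`; the 4.6s delta in this run is largely the wall time elapsed between the two readings. A small sketch of the skew computation (skew is a hypothetical helper; the sample value is from this run, and nanosecond precision lost in the float parse is irrelevant at this granularity):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// skew parses the guest's `date +%s.%N` output and returns guest minus ref.
	func skew(guestOut string, ref time.Time) (time.Duration, error) {
		f, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
		if err != nil {
			return 0, err
		}
		guest := time.Unix(0, int64(f*float64(time.Second)))
		return guest.Sub(ref), nil
	}

	func main() {
		d, err := skew("1728915672.238854784", time.Now()) // guest reading from this run
		if err != nil {
			panic(err)
		}
		fmt.Printf("delta=%s; would run: sudo date -s @%d\n", d, time.Now().Unix())
	}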
	I1014 07:21:16.921169   13076 start.go:83] releasing machines lock for "ha-132600", held for 2m11.2030498s
	I1014 07:21:16.921319   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:21:18.988904   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:21:18.989898   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:18.989947   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:21:21.410167   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:21:21.410403   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:21.414482   13076 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1014 07:21:21.414683   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:21:21.423844   13076 ssh_runner.go:195] Run: cat /version.json
	I1014 07:21:21.423844   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:21:23.554001   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:21:23.554206   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:23.554362   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:21:23.565882   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:21:23.565882   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:23.565882   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:21:26.092249   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:21:26.092249   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:26.092379   13076 sshutil.go:53] new ssh client: &{IP:172.20.108.120 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\id_rsa Username:docker}
	I1014 07:21:26.119019   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:21:26.119086   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:26.119215   13076 sshutil.go:53] new ssh client: &{IP:172.20.108.120 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\id_rsa Username:docker}
	I1014 07:21:26.179573   13076 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.764971s)
	W1014 07:21:26.179670   13076 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
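This status-127 failure appears to be the root cause of the warning recorded at 07:21:26, and of this test's unexpected stderr: the connectivity probe runs the Windows binary name curl.exe inside the Linux guest, where only plain curl exists, so the probe can never succeed regardless of actual network reachability. A hypothetical guard (curlBinary is illustrative, not minikube's actual fix) would pick the binary by the target OS of the command rather than the host:

	package main

	import "fmt"

	// curlBinary picks the probe binary for the OS the command will run on,
	// not the OS running minikube. Hypothetical helper.
	func curlBinary(guestOS string) string {
		if guestOS == "windows" {
			return "curl.exe"
		}
		return "curl" // the buildroot guest above only ships plain curl
	}

	func main() {
		fmt.Println(curlBinary("linux")) // curl
	}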
	I1014 07:21:26.212879   13076 ssh_runner.go:235] Completed: cat /version.json: (4.7890281s)
	I1014 07:21:26.223815   13076 ssh_runner.go:195] Run: systemctl --version
	I1014 07:21:26.243251   13076 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 07:21:26.251321   13076 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 07:21:26.262431   13076 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 07:21:26.290027   13076 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1014 07:21:26.290027   13076 start.go:495] detecting cgroup driver to use...
	I1014 07:21:26.290027   13076 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W1014 07:21:26.296743   13076 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W1014 07:21:26.296743   13076 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1014 07:21:26.340326   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1014 07:21:26.370362   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1014 07:21:26.391878   13076 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1014 07:21:26.402847   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1014 07:21:26.433241   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1014 07:21:26.462619   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1014 07:21:26.491735   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1014 07:21:26.520404   13076 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 07:21:26.550615   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1014 07:21:26.580278   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1014 07:21:26.608564   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1014 07:21:26.636849   13076 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 07:21:26.656448   13076 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1014 07:21:26.667504   13076 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1014 07:21:26.700184   13076 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
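The failed sysctl probe above is tolerated by design ("which might be okay"): when net.bridge.bridge-nf-call-iptables is absent, the runner falls back to loading br_netfilter and enabling IPv4 forwarding. The same tolerate-then-fix sequence, sketched (a sketch of the pattern; these commands would run over the SSH runner in the real flow):

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		steps := []struct {
			cmd     []string
			mayFail bool
		}{
			{[]string{"sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"}, true}, // probe only
			{[]string{"sudo", "modprobe", "br_netfilter"}, false},
			{[]string{"sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward"}, false},
		}
		for _, s := range steps {
			if out, err := exec.Command(s.cmd[0], s.cmd[1:]...).CombinedOutput(); err != nil && !s.mayFail {
				log.Fatalf("%v: %v\n%s", s.cmd, err, out)
			}
		}
	}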
	I1014 07:21:26.726834   13076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:21:26.916163   13076 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1014 07:21:26.953200   13076 start.go:495] detecting cgroup driver to use...
	I1014 07:21:26.967627   13076 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1014 07:21:27.005080   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 07:21:27.039193   13076 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 07:21:27.083701   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 07:21:27.118670   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1014 07:21:27.153508   13076 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1014 07:21:27.209689   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1014 07:21:27.230765   13076 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 07:21:27.273213   13076 ssh_runner.go:195] Run: which cri-dockerd
	I1014 07:21:27.291783   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1014 07:21:27.308581   13076 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1014 07:21:27.351709   13076 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1014 07:21:27.539571   13076 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1014 07:21:27.725567   13076 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1014 07:21:27.725567   13076 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1014 07:21:27.768723   13076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:21:27.951142   13076 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1014 07:21:30.486027   13076 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5347969s)
	I1014 07:21:30.497722   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1014 07:21:30.531680   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1014 07:21:30.563742   13076 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1014 07:21:30.767218   13076 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1014 07:21:30.967093   13076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:21:31.191192   13076 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1014 07:21:31.229757   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1014 07:21:31.267426   13076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:21:31.462408   13076 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1014 07:21:31.569509   13076 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1014 07:21:31.581151   13076 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1014 07:21:31.591322   13076 start.go:563] Will wait 60s for crictl version
	I1014 07:21:31.604845   13076 ssh_runner.go:195] Run: which crictl
	I1014 07:21:31.622874   13076 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1014 07:21:31.680621   13076 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I1014 07:21:31.689423   13076 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1014 07:21:31.732594   13076 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1014 07:21:31.766375   13076 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	I1014 07:21:31.766375   13076 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I1014 07:21:31.770835   13076 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I1014 07:21:31.770835   13076 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I1014 07:21:31.770835   13076 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I1014 07:21:31.770835   13076 ip.go:211] Found interface: {Index:13 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:08:65:4d Flags:up|broadcast|multicast|running}
	I1014 07:21:31.773955   13076 ip.go:214] interface addr: fe80::b548:ff79:d464:75b8/64
	I1014 07:21:31.773955   13076 ip.go:214] interface addr: 172.20.96.1/20
	I1014 07:21:31.784171   13076 ssh_runner.go:195] Run: grep 172.20.96.1	host.minikube.internal$ /etc/hosts
	I1014 07:21:31.791192   13076 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.20.96.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 07:21:31.825649   13076 kubeadm.go:883] updating cluster {Name:ha-132600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-132600 Namespace:default APIServerHAVIP:172.20.111.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.108.120 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 07:21:31.825649   13076 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1014 07:21:31.834667   13076 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1014 07:21:31.867707   13076 docker.go:689] Got preloaded images: 
	I1014 07:21:31.867707   13076 docker.go:695] registry.k8s.io/kube-apiserver:v1.31.1 wasn't preloaded
	I1014 07:21:31.880238   13076 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1014 07:21:31.908893   13076 ssh_runner.go:195] Run: which lz4
	I1014 07:21:31.914374   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1014 07:21:31.924715   13076 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1014 07:21:31.930846   13076 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1014 07:21:31.931075   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (342028912 bytes)
	I1014 07:21:33.829320   13076 docker.go:653] duration metric: took 1.9148657s to copy over tarball
	I1014 07:21:33.840382   13076 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1014 07:21:42.539043   13076 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.6986506s)
	I1014 07:21:42.539174   13076 ssh_runner.go:146] rm: /preloaded.tar.lz4
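The stat failure above is the expected path on first start: minikube probes for /preloaded.tar.lz4, and when stat exits non-zero it transfers the 342 MB preload tarball over SSH, extracts it, and removes it. A short, hedged sketch of the same check-then-copy decision, using local paths and a stand-in copy function rather than minikube's ssh_runner:

    package main

    import (
        "errors"
        "fmt"
        "io"
        "io/fs"
        "os"
    )

    // copyFile stands in for the scp transfer shown in the log.
    func copyFile(src, dst string) error {
        in, err := os.Open(src)
        if err != nil {
            return err
        }
        defer in.Close()
        out, err := os.Create(dst)
        if err != nil {
            return err
        }
        defer out.Close()
        _, err = io.Copy(out, in)
        return err
    }

    func main() {
        dst := "/tmp/preloaded.tar.lz4"
        if _, err := os.Stat(dst); errors.Is(err, fs.ErrNotExist) {
            // Same branch the log takes: target missing, so transfer it.
            if err := copyFile("preloaded-images.tar.lz4", dst); err != nil {
                fmt.Fprintln(os.Stderr, "copy failed:", err)
            }
        } else {
            fmt.Println("preload already present, skipping copy")
        }
    }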
	I1014 07:21:42.611065   13076 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1014 07:21:42.636639   13076 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I1014 07:21:42.681041   13076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:21:42.879066   13076 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1014 07:21:46.161713   13076 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.2824935s)
	I1014 07:21:46.173131   13076 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1014 07:21:46.198645   13076 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1014 07:21:46.198645   13076 cache_images.go:84] Images are preloaded, skipping loading
	I1014 07:21:46.198645   13076 kubeadm.go:934] updating node { 172.20.108.120 8443 v1.31.1 docker true true} ...
	I1014 07:21:46.198645   13076 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-132600 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.20.108.120
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-132600 Namespace:default APIServerHAVIP:172.20.111.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 07:21:46.209892   13076 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1014 07:21:46.283733   13076 cni.go:84] Creating CNI manager for ""
	I1014 07:21:46.283861   13076 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1014 07:21:46.283932   13076 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1014 07:21:46.284000   13076 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.20.108.120 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-132600 NodeName:ha-132600 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.20.108.120"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.20.108.120 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 07:21:46.284290   13076 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.20.108.120
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-132600"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "172.20.108.120"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.20.108.120"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
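The kubeadm config dumped above is one stream of four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by "---". A minimal sketch of walking such a multi-document stream with gopkg.in/yaml.v3 and printing each document's apiVersion/kind; the filename is illustrative and this is not minikube code:

    package main

    import (
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("kubeadm.yaml")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        defer f.Close()

        dec := yaml.NewDecoder(f)
        for {
            var doc struct {
                APIVersion string `yaml:"apiVersion"`
                Kind       string `yaml:"kind"`
            }
            if err := dec.Decode(&doc); err != nil {
                if err == io.EOF {
                    break // no more documents in the stream
                }
                fmt.Fprintln(os.Stderr, "decode:", err)
                return
            }
            fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
        }
    }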
	I1014 07:21:46.284367   13076 kube-vip.go:115] generating kube-vip config ...
	I1014 07:21:46.295693   13076 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1014 07:21:46.324017   13076 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1014 07:21:46.324230   13076 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.20.111.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
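The vip_leaseduration/vip_renewdeadline/vip_retryperiod values (5/3/1 seconds) and the plndr-cp-lock lease name in the manifest above correspond to Kubernetes lease-based leader election: the kube-vip pod that holds the lease answers on the VIP 172.20.111.254. A hedged sketch of the equivalent client-go configuration; the identity and kubeconfig wiring are illustrative, not kube-vip's actual code:

    package main

    import (
        "context"
        "os"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/tools/leaderelection"
        "k8s.io/client-go/tools/leaderelection/resourcelock"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)
        host, _ := os.Hostname()

        lock := &resourcelock.LeaseLock{
            LeaseMeta:  metav1.ObjectMeta{Name: "plndr-cp-lock", Namespace: "kube-system"},
            Client:     client.CoordinationV1(),
            LockConfig: resourcelock.ResourceLockConfig{Identity: host},
        }

        leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
            Lock:          lock,
            LeaseDuration: 5 * time.Second, // vip_leaseduration
            RenewDeadline: 3 * time.Second, // vip_renewdeadline
            RetryPeriod:   1 * time.Second, // vip_retryperiod
            Callbacks: leaderelection.LeaderCallbacks{
                OnStartedLeading: func(ctx context.Context) { /* claim the VIP here */ },
                OnStoppedLeading: func() { /* release the VIP */ },
            },
        })
    }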
	I1014 07:21:46.334184   13076 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1014 07:21:46.349321   13076 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 07:21:46.360771   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1014 07:21:46.379224   13076 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (310 bytes)
	I1014 07:21:46.412080   13076 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 07:21:46.441691   13076 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I1014 07:21:46.479025   13076 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1014 07:21:46.519461   13076 ssh_runner.go:195] Run: grep 172.20.111.254	control-plane.minikube.internal$ /etc/hosts
	I1014 07:21:46.525021   13076 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.20.111.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 07:21:46.554943   13076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:21:46.730466   13076 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 07:21:46.761153   13076 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600 for IP: 172.20.108.120
	I1014 07:21:46.761153   13076 certs.go:194] generating shared ca certs ...
	I1014 07:21:46.761153   13076 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:21:46.761825   13076 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I1014 07:21:46.762680   13076 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I1014 07:21:46.762846   13076 certs.go:256] generating profile certs ...
	I1014 07:21:46.763605   13076 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\client.key
	I1014 07:21:46.763605   13076 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\client.crt with IP's: []
	I1014 07:21:46.955865   13076 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\client.crt ...
	I1014 07:21:46.955865   13076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\client.crt: {Name:mk4a5e51a4b9d62279c7ef4ef47716bcdf88d704 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:21:46.957857   13076 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\client.key ...
	I1014 07:21:46.957857   13076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\client.key: {Name:mkc80e99cfe4d8d17198a87fb188cfa1a72d727d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:21:46.958881   13076 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.key.59094d61
	I1014 07:21:46.958881   13076 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.crt.59094d61 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.20.108.120 172.20.111.254]
	I1014 07:21:47.375660   13076 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.crt.59094d61 ...
	I1014 07:21:47.375660   13076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.crt.59094d61: {Name:mke858ab0ac103663123708f9814cfdffe03211e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:21:47.377216   13076 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.key.59094d61 ...
	I1014 07:21:47.377216   13076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.key.59094d61: {Name:mk972fca892a193d6b4e84d42e27ed86ce9f0457 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:21:47.377811   13076 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.crt.59094d61 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.crt
	I1014 07:21:47.391806   13076 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.key.59094d61 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.key
	I1014 07:21:47.392823   13076 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.key
	I1014 07:21:47.392823   13076 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.crt with IP's: []
	I1014 07:21:47.675751   13076 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.crt ...
	I1014 07:21:47.675751   13076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.crt: {Name:mk5a9f1239213dd0a0f36b4ee41d5ac56cbd79c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:21:47.677918   13076 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.key ...
	I1014 07:21:47.677918   13076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.key: {Name:mkb931094180fef1516b0d43acb6430ecbcbb3fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
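Each WriteFile above acquires a named lock with Delay:500ms Timeout:1m0s, which reads as: retry every 500 ms for up to a minute before giving up. A minimal sketch of that acquire-with-timeout pattern using an exclusive lock file; minikube uses a mutex library rather than this exact mechanism, so treat it purely as an illustration of the retry loop:

    package main

    import (
        "errors"
        "fmt"
        "os"
        "time"
    )

    // acquire retries creating an exclusive lock file until the timeout elapses.
    func acquire(lockPath string, delay, timeout time.Duration) (func(), error) {
        deadline := time.Now().Add(timeout)
        for {
            f, err := os.OpenFile(lockPath, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0600)
            if err == nil {
                f.Close()
                return func() { os.Remove(lockPath) }, nil
            }
            if time.Now().After(deadline) {
                return nil, errors.New("timed out waiting for " + lockPath)
            }
            time.Sleep(delay) // the Delay:500ms in the log
        }
    }

    func main() {
        release, err := acquire("client.crt.lock", 500*time.Millisecond, time.Minute)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        defer release()
        // ... write the certificate while holding the lock ...
    }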
	I1014 07:21:47.678894   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I1014 07:21:47.678894   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I1014 07:21:47.678894   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1014 07:21:47.678894   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1014 07:21:47.679943   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1014 07:21:47.679943   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1014 07:21:47.679943   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1014 07:21:47.691415   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1014 07:21:47.693468   13076 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\936.pem (1338 bytes)
	W1014 07:21:47.693468   13076 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\936_empty.pem, impossibly tiny 0 bytes
	I1014 07:21:47.694004   13076 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1014 07:21:47.694400   13076 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I1014 07:21:47.694671   13076 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1014 07:21:47.695091   13076 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I1014 07:21:47.695960   13076 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem (1708 bytes)
	I1014 07:21:47.696145   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\936.pem -> /usr/share/ca-certificates/936.pem
	I1014 07:21:47.696401   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem -> /usr/share/ca-certificates/9362.pem
	I1014 07:21:47.696580   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1014 07:21:47.697580   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 07:21:47.742705   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1014 07:21:47.790988   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 07:21:47.837975   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 07:21:47.884872   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1014 07:21:47.936878   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1014 07:21:47.985330   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 07:21:48.029181   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 07:21:48.078824   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\936.pem --> /usr/share/ca-certificates/936.pem (1338 bytes)
	I1014 07:21:48.127907   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem --> /usr/share/ca-certificates/9362.pem (1708 bytes)
	I1014 07:21:48.174542   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 07:21:48.218635   13076 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 07:21:48.264009   13076 ssh_runner.go:195] Run: openssl version
	I1014 07:21:48.282939   13076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 07:21:48.310256   13076 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 07:21:48.318050   13076 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 13:44 /usr/share/ca-certificates/minikubeCA.pem
	I1014 07:21:48.328768   13076 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 07:21:48.348090   13076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 07:21:48.376484   13076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/936.pem && ln -fs /usr/share/ca-certificates/936.pem /etc/ssl/certs/936.pem"
	I1014 07:21:48.408494   13076 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/936.pem
	I1014 07:21:48.416513   13076 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 14:00 /usr/share/ca-certificates/936.pem
	I1014 07:21:48.427815   13076 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/936.pem
	I1014 07:21:48.447033   13076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/936.pem /etc/ssl/certs/51391683.0"
	I1014 07:21:48.474293   13076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9362.pem && ln -fs /usr/share/ca-certificates/9362.pem /etc/ssl/certs/9362.pem"
	I1014 07:21:48.502293   13076 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9362.pem
	I1014 07:21:48.509688   13076 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 14:00 /usr/share/ca-certificates/9362.pem
	I1014 07:21:48.519811   13076 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9362.pem
	I1014 07:21:48.540403   13076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9362.pem /etc/ssl/certs/3ec20f2e.0"
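The ln -fs commands above implement OpenSSL's hashed-directory layout: each CA in /etc/ssl/certs is reachable through a symlink named <subject-hash>.0, where the hash is what openssl x509 -hash -noout prints. The names b5213941.0, 51391683.0 and 3ec20f2e.0 in the log are exactly those hashes. A sketch that derives the link name the same way, shelling out to openssl; the paths are taken from the log but the program itself is illustrative:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        cert := "/usr/share/ca-certificates/minikubeCA.pem"

        // Ask openssl for the subject hash, just as the log does.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        hash := strings.TrimSpace(string(out))

        link := filepath.Join("/etc/ssl/certs", hash+".0")
        // ln -fs equivalent: remove any stale link, then create a fresh one.
        os.Remove(link)
        if err := os.Symlink(cert, link); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }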
	I1014 07:21:48.569007   13076 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 07:21:48.575758   13076 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1014 07:21:48.576118   13076 kubeadm.go:392] StartCluster: {Name:ha-132600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-132600 Namespace:default APIServerHAVIP:172.20.111.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.108.120 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 07:21:48.584978   13076 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1014 07:21:48.619430   13076 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 07:21:48.647606   13076 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 07:21:48.676291   13076 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 07:21:48.693679   13076 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 07:21:48.693679   13076 kubeadm.go:157] found existing configuration files:
	
	I1014 07:21:48.703613   13076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 07:21:48.722067   13076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 07:21:48.732618   13076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 07:21:48.759836   13076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 07:21:48.777622   13076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 07:21:48.789748   13076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 07:21:48.818512   13076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 07:21:48.837396   13076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 07:21:48.848696   13076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 07:21:48.876926   13076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 07:21:48.893530   13076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 07:21:48.904372   13076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 07:21:48.920210   13076 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1014 07:21:49.392185   13076 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 07:22:04.140256   13076 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1014 07:22:04.140412   13076 kubeadm.go:310] [preflight] Running pre-flight checks
	I1014 07:22:04.140501   13076 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 07:22:04.140848   13076 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 07:22:04.141206   13076 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 07:22:04.141311   13076 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 07:22:04.144180   13076 out.go:235]   - Generating certificates and keys ...
	I1014 07:22:04.144373   13076 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1014 07:22:04.144575   13076 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1014 07:22:04.144939   13076 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1014 07:22:04.145172   13076 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1014 07:22:04.145315   13076 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1014 07:22:04.145521   13076 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1014 07:22:04.145845   13076 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1014 07:22:04.146212   13076 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-132600 localhost] and IPs [172.20.108.120 127.0.0.1 ::1]
	I1014 07:22:04.146365   13076 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1014 07:22:04.146614   13076 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-132600 localhost] and IPs [172.20.108.120 127.0.0.1 ::1]
	I1014 07:22:04.147003   13076 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1014 07:22:04.147200   13076 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1014 07:22:04.147236   13076 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1014 07:22:04.147236   13076 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 07:22:04.147236   13076 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 07:22:04.147236   13076 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 07:22:04.147954   13076 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 07:22:04.147987   13076 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 07:22:04.147987   13076 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 07:22:04.147987   13076 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 07:22:04.148785   13076 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 07:22:04.151235   13076 out.go:235]   - Booting up control plane ...
	I1014 07:22:04.151235   13076 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 07:22:04.151819   13076 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 07:22:04.152046   13076 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 07:22:04.152333   13076 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 07:22:04.152333   13076 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 07:22:04.152333   13076 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1014 07:22:04.152333   13076 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 07:22:04.152333   13076 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 07:22:04.153426   13076 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 523.263843ms
	I1014 07:22:04.153534   13076 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1014 07:22:04.153534   13076 kubeadm.go:310] [api-check] The API server is healthy after 9.176461865s
	I1014 07:22:04.153534   13076 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1014 07:22:04.154243   13076 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1014 07:22:04.154243   13076 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1014 07:22:04.154779   13076 kubeadm.go:310] [mark-control-plane] Marking the node ha-132600 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1014 07:22:04.154779   13076 kubeadm.go:310] [bootstrap-token] Using token: rm3d8a.inqsemtt5b5l5x2e
	I1014 07:22:04.157926   13076 out.go:235]   - Configuring RBAC rules ...
	I1014 07:22:04.158760   13076 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1014 07:22:04.158928   13076 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1014 07:22:04.158928   13076 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1014 07:22:04.159656   13076 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1014 07:22:04.159851   13076 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1014 07:22:04.160089   13076 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1014 07:22:04.160270   13076 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1014 07:22:04.160529   13076 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1014 07:22:04.160529   13076 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1014 07:22:04.160529   13076 kubeadm.go:310] 
	I1014 07:22:04.160529   13076 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1014 07:22:04.160529   13076 kubeadm.go:310] 
	I1014 07:22:04.160529   13076 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1014 07:22:04.160529   13076 kubeadm.go:310] 
	I1014 07:22:04.161100   13076 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1014 07:22:04.161186   13076 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1014 07:22:04.161352   13076 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1014 07:22:04.161352   13076 kubeadm.go:310] 
	I1014 07:22:04.161534   13076 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1014 07:22:04.161534   13076 kubeadm.go:310] 
	I1014 07:22:04.161706   13076 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1014 07:22:04.161706   13076 kubeadm.go:310] 
	I1014 07:22:04.161950   13076 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1014 07:22:04.161950   13076 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1014 07:22:04.161950   13076 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1014 07:22:04.161950   13076 kubeadm.go:310] 
	I1014 07:22:04.162515   13076 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1014 07:22:04.162607   13076 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1014 07:22:04.162607   13076 kubeadm.go:310] 
	I1014 07:22:04.162607   13076 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token rm3d8a.inqsemtt5b5l5x2e \
	I1014 07:22:04.162607   13076 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f876f951efbe7f1ed47ca578ee0d33e6ce8fe69c1a008a8154ee00f56ac84e46 \
	I1014 07:22:04.163162   13076 kubeadm.go:310] 	--control-plane 
	I1014 07:22:04.163261   13076 kubeadm.go:310] 
	I1014 07:22:04.163340   13076 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1014 07:22:04.163340   13076 kubeadm.go:310] 
	I1014 07:22:04.163570   13076 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token rm3d8a.inqsemtt5b5l5x2e \
	I1014 07:22:04.163570   13076 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f876f951efbe7f1ed47ca578ee0d33e6ce8fe69c1a008a8154ee00f56ac84e46 
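The --discovery-token-ca-cert-hash printed in the join commands above is not a hash of the certificate file: kubeadm hashes the CA's DER-encoded public key (the SubjectPublicKeyInfo). A short sketch that recomputes it from a PEM-encoded ca.crt on disk:

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        pemBytes, err := os.ReadFile("ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("no PEM block found in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA.
        spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
        if err != nil {
            panic(err)
        }
        fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
    }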
	I1014 07:22:04.163570   13076 cni.go:84] Creating CNI manager for ""
	I1014 07:22:04.163570   13076 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1014 07:22:04.167124   13076 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1014 07:22:04.179134   13076 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1014 07:22:04.187701   13076 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1014 07:22:04.187701   13076 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1014 07:22:04.233053   13076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1014 07:22:04.912894   13076 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1014 07:22:04.925574   13076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-132600 minikube.k8s.io/updated_at=2024_10_14T07_22_04_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b minikube.k8s.io/name=ha-132600 minikube.k8s.io/primary=true
	I1014 07:22:04.927795   13076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 07:22:04.958241   13076 ops.go:34] apiserver oom_adj: -16
	I1014 07:22:05.168034   13076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 07:22:05.667048   13076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 07:22:06.166157   13076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 07:22:06.667561   13076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 07:22:07.167283   13076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 07:22:07.305312   13076 kubeadm.go:1113] duration metric: took 2.3924152s to wait for elevateKubeSystemPrivileges
	I1014 07:22:07.306284   13076 kubeadm.go:394] duration metric: took 18.730142s to StartCluster
	I1014 07:22:07.306284   13076 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:22:07.306284   13076 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1014 07:22:07.308077   13076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:22:07.309777   13076 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1014 07:22:07.309777   13076 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.20.108.120 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1014 07:22:07.309953   13076 start.go:241] waiting for startup goroutines ...
	I1014 07:22:07.309868   13076 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1014 07:22:07.310143   13076 addons.go:69] Setting storage-provisioner=true in profile "ha-132600"
	I1014 07:22:07.310235   13076 addons.go:234] Setting addon storage-provisioner=true in "ha-132600"
	I1014 07:22:07.310143   13076 addons.go:69] Setting default-storageclass=true in profile "ha-132600"
	I1014 07:22:07.310402   13076 config.go:182] Loaded profile config "ha-132600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:22:07.310402   13076 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-132600"
	I1014 07:22:07.310402   13076 host.go:66] Checking if "ha-132600" exists ...
	I1014 07:22:07.311457   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:22:07.311926   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:22:07.495748   13076 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.20.96.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1014 07:22:08.130501   13076 start.go:971] {"host.minikube.internal": 172.20.96.1} host record injected into CoreDNS's ConfigMap
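The sed pipeline above splices a hosts plugin block into the CoreDNS Corefile so that pods resolve host.minikube.internal to the Hyper-V gateway (172.20.96.1), then pushes the edited ConfigMap back with kubectl replace. A hedged client-go equivalent; the string surgery here is simplified relative to minikube's sed script:

    package main

    import (
        "context"
        "os"
        "strings"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)
        ctx := context.Background()

        cm, err := client.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }

        hostsBlock := "        hosts {\n           172.20.96.1 host.minikube.internal\n           fallthrough\n        }\n"
        // Insert the hosts block just before the forward directive, as the sed script does.
        cm.Data["Corefile"] = strings.Replace(
            cm.Data["Corefile"], "        forward .", hostsBlock+"        forward .", 1)

        if _, err := client.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }
    }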
	I1014 07:22:09.594501   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:22:09.594501   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:09.594786   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:22:09.594878   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:09.595740   13076 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1014 07:22:09.596691   13076 kapi.go:59] client config for ha-132600: &rest.Config{Host:"https://172.20.111.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-132600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-132600\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2926ae0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
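The rest.Config dump above is what minikube derives from the freshly written kubeconfig: note the Host pointing at the HA VIP (172.20.111.254:8443) and the client cert/key paths under the profile directory. Building the same thing with client-go takes two calls; the kubeconfig path below is the one from the log and stands in for whatever path applies:

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Load the kubeconfig minikube just wrote and derive a rest.Config from it.
        cfg, err := clientcmd.BuildConfigFromFlags("", `C:\Users\jenkins.minikube1\minikube-integration\kubeconfig`)
        if err != nil {
            panic(err)
        }
        fmt.Println("API server:", cfg.Host)

        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        _ = client // ready for typed API calls
    }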
	I1014 07:22:09.597939   13076 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 07:22:09.598267   13076 cert_rotation.go:140] Starting client certificate rotation controller
	I1014 07:22:09.598703   13076 addons.go:234] Setting addon default-storageclass=true in "ha-132600"
	I1014 07:22:09.598956   13076 host.go:66] Checking if "ha-132600" exists ...
	I1014 07:22:09.600143   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:22:09.600143   13076 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 07:22:09.600247   13076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1014 07:22:09.600352   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:22:11.901872   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:22:11.901872   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:11.902855   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:22:11.913525   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:22:11.914554   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:11.914725   13076 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1014 07:22:11.914841   13076 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1014 07:22:11.915002   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:22:14.163865   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:22:14.163865   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:14.163865   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:22:14.597752   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:22:14.597752   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:14.598273   13076 sshutil.go:53] new ssh client: &{IP:172.20.108.120 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\id_rsa Username:docker}
	I1014 07:22:14.755906   13076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 07:22:16.766331   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:22:16.766798   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:16.767079   13076 sshutil.go:53] new ssh client: &{IP:172.20.108.120 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\id_rsa Username:docker}
	I1014 07:22:16.907784   13076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1014 07:22:17.107355   13076 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1014 07:22:17.107355   13076 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1014 07:22:17.108322   13076 round_trippers.go:463] GET https://172.20.111.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1014 07:22:17.108322   13076 round_trippers.go:469] Request Headers:
	I1014 07:22:17.108322   13076 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:22:17.108322   13076 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:22:17.120122   13076 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1014 07:22:17.120927   13076 round_trippers.go:463] PUT https://172.20.111.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1014 07:22:17.121006   13076 round_trippers.go:469] Request Headers:
	I1014 07:22:17.121006   13076 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:22:17.121006   13076 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:22:17.121089   13076 round_trippers.go:473]     Content-Type: application/json
	I1014 07:22:17.124876   13076 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
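The GET/PUT pair above is the default-storageclass addon marking the "standard" StorageClass as the cluster default. With client-go that is a get, an annotation, and an update; this sketch trims error handling to panics and is illustrative rather than minikube's code:

    package main

    import (
        "context"
        "os"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)
        ctx := context.Background()

        sc, err := client.StorageV1().StorageClasses().Get(ctx, "standard", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        if sc.Annotations == nil {
            sc.Annotations = map[string]string{}
        }
        // This annotation is what makes a StorageClass the cluster default.
        sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"

        if _, err := client.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }
    }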
	I1014 07:22:17.128782   13076 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1014 07:22:17.133792   13076 addons.go:510] duration metric: took 9.8239112s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1014 07:22:17.133792   13076 start.go:246] waiting for cluster config update ...
	I1014 07:22:17.133792   13076 start.go:255] writing updated cluster config ...
	I1014 07:22:17.140793   13076 out.go:201] 
	I1014 07:22:17.150794   13076 config.go:182] Loaded profile config "ha-132600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:22:17.151784   13076 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\config.json ...
	I1014 07:22:17.157758   13076 out.go:177] * Starting "ha-132600-m02" control-plane node in "ha-132600" cluster
	I1014 07:22:17.160381   13076 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1014 07:22:17.160381   13076 cache.go:56] Caching tarball of preloaded images
	I1014 07:22:17.160381   13076 preload.go:172] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1014 07:22:17.160381   13076 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1014 07:22:17.160381   13076 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\config.json ...
	I1014 07:22:17.168672   13076 start.go:360] acquireMachinesLock for ha-132600-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 07:22:17.169194   13076 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-132600-m02"
	I1014 07:22:17.169334   13076 start.go:93] Provisioning new machine with config: &{Name:ha-132600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-132600 Namespace:default APIServerHAVIP:172.20.111.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.108.120 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
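
The config dump above is a single log line. Note that the Nodes slice already carries the m02 entry with an empty IP, which stays unset until the VM's DHCP lease is read back later in this log. An illustrative Go mirror of the logged node fields (field names are copied from the dump; this is not minikube's actual type definition):

package config

// Node is illustrative only, reconstructed from the logged config dump.
type Node struct {
	Name              string // "m02" for the node provisioned below
	IP                string // empty until the VM's DHCP address is observed
	Port              int    // 8443
	KubernetesVersion string // "v1.31.1"
	ContainerRuntime  string // "docker"
	ControlPlane      bool   // true: this is a second control-plane node
	Worker            bool
}
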
	I1014 07:22:17.169334   13076 start.go:125] createHost starting for "m02" (driver="hyperv")
	I1014 07:22:17.171957   13076 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1014 07:22:17.171957   13076 start.go:159] libmachine.API.Create for "ha-132600" (driver="hyperv")
	I1014 07:22:17.171957   13076 client.go:168] LocalClient.Create starting
	I1014 07:22:17.171957   13076 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I1014 07:22:17.173148   13076 main.go:141] libmachine: Decoding PEM data...
	I1014 07:22:17.173148   13076 main.go:141] libmachine: Parsing certificate...
	I1014 07:22:17.173148   13076 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I1014 07:22:17.173148   13076 main.go:141] libmachine: Decoding PEM data...
	I1014 07:22:17.173802   13076 main.go:141] libmachine: Parsing certificate...
	I1014 07:22:17.173802   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I1014 07:22:19.094383   13076 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I1014 07:22:19.094383   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:19.094970   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I1014 07:22:20.808796   13076 main.go:141] libmachine: [stdout =====>] : False
	
	I1014 07:22:20.808796   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:20.808897   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1014 07:22:22.292959   13076 main.go:141] libmachine: [stdout =====>] : True
	
	I1014 07:22:22.292959   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:22.293074   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1014 07:22:25.811891   13076 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1014 07:22:25.811891   13076 main.go:141] libmachine: [stderr =====>] : 
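
Every Hyper-V step in this log is a one-shot powershell.exe invocation whose captured output is echoed as the [executing ==>] / [stdout =====>] / [stderr =====>] triples. A minimal sketch of that pattern, assuming a hypothetical runPowerShell helper rather than minikube's real driver code:

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

// runPowerShell mirrors the "[executing ==>]" pattern above: each Hyper-V
// operation is a fresh powershell.exe process whose stdout and stderr are
// captured separately. The helper name and hard-coded path are illustrative.
func runPowerShell(args ...string) (string, string, error) {
	ps := `C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`
	cmd := exec.Command(ps, append([]string{"-NoProfile", "-NonInteractive"}, args...)...)
	var stdout, stderr bytes.Buffer
	cmd.Stdout = &stdout
	cmd.Stderr = &stderr
	err := cmd.Run()
	return stdout.String(), stderr.String(), err
}

func main() {
	// Same query the log polls repeatedly while waiting for the host to start.
	out, errOut, err := runPowerShell(`( Hyper-V\Get-VM ha-132600-m02 ).state`)
	fmt.Printf("[stdout =====>] : %s\n", out)
	fmt.Printf("[stderr =====>] : %s\n", errOut)
	if err != nil {
		fmt.Println("error:", err)
	}
}
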
	I1014 07:22:25.813583   13076 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1014 07:22:26.334218   13076 main.go:141] libmachine: Creating SSH key...
	I1014 07:22:26.579833   13076 main.go:141] libmachine: Creating VM...
	I1014 07:22:26.580305   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1014 07:22:29.402237   13076 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1014 07:22:29.402237   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:29.402237   13076 main.go:141] libmachine: Using switch "Default Switch"
	I1014 07:22:29.402237   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1014 07:22:31.181025   13076 main.go:141] libmachine: [stdout =====>] : True
	
	I1014 07:22:31.181025   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:31.181025   13076 main.go:141] libmachine: Creating VHD
	I1014 07:22:31.181025   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I1014 07:22:34.986705   13076 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 9BEBB030-6569-4A5D-A43D-C976E242B24D
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I1014 07:22:34.986953   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:34.986953   13076 main.go:141] libmachine: Writing magic tar header
	I1014 07:22:34.986953   13076 main.go:141] libmachine: Writing SSH key tar header
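
The "Writing magic tar header" / "Writing SSH key tar header" steps follow the boot2docker convention: the small fixed VHD created above doubles as a raw buffer whose first bytes are a tar archive carrying the freshly generated SSH key, which the guest unpacks on first boot. A rough sketch of writing such an archive with Go's archive/tar (the file name and key contents are placeholders, not the paths used by the test):

package main

import (
	"archive/tar"
	"os"
)

func main() {
	// Illustrative stand-in for the fixed.vhd data area.
	f, err := os.OpenFile("disk-image.raw", os.O_CREATE|os.O_WRONLY, 0o644)
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// A tar stream written at offset 0; the guest detects and expands it
	// on first boot to install the SSH key.
	tw := tar.NewWriter(f)
	key := []byte("ssh-rsa AAAA... example-key") // placeholder key material
	hdr := &tar.Header{Name: ".ssh/authorized_keys", Mode: 0o600, Size: int64(len(key))}
	if err := tw.WriteHeader(hdr); err != nil {
		panic(err)
	}
	if _, err := tw.Write(key); err != nil {
		panic(err)
	}
	if err := tw.Close(); err != nil {
		panic(err)
	}
}
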
	I1014 07:22:34.998864   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I1014 07:22:38.117497   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:22:38.117497   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:38.117497   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\disk.vhd' -SizeBytes 20000MB
	I1014 07:22:40.750048   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:22:40.750048   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:40.750048   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-132600-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I1014 07:22:44.293371   13076 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-132600-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I1014 07:22:44.294009   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:44.294158   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-132600-m02 -DynamicMemoryEnabled $false
	I1014 07:22:46.495846   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:22:46.496106   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:46.496211   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-132600-m02 -Count 2
	I1014 07:22:48.637702   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:22:48.637702   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:48.637702   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-132600-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\boot2docker.iso'
	I1014 07:22:51.128606   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:22:51.128606   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:51.128606   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-132600-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\disk.vhd'
	I1014 07:22:53.765202   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:22:53.765775   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:53.765775   13076 main.go:141] libmachine: Starting VM...
	I1014 07:22:53.765847   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-132600-m02
	I1014 07:22:56.961396   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:22:56.961396   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:56.962040   13076 main.go:141] libmachine: Waiting for host to start...
	I1014 07:22:56.962040   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:22:59.251592   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:22:59.251592   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:59.251838   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:01.758067   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:23:01.758159   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:02.758293   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:04.921490   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:04.921565   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:04.921802   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:07.441595   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:23:07.441595   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:08.443053   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:10.659721   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:10.659721   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:10.660418   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:13.126618   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:23:13.126952   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:14.127183   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:16.307414   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:16.307414   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:16.307500   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:18.748902   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:23:18.749678   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:19.749962   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:21.893323   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:21.893323   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:21.893323   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:24.464351   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:23:24.465392   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:24.465471   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:26.567626   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:26.567626   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:26.567626   13076 machine.go:93] provisionDockerMachine start ...
	I1014 07:23:26.568400   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:28.681591   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:28.681591   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:28.682618   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:31.178308   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:23:31.178308   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:31.192309   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:23:31.207523   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.111.83 22 <nil> <nil>}
	I1014 07:23:31.208024   13076 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 07:23:31.337861   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1014 07:23:31.337861   13076 buildroot.go:166] provisioning hostname "ha-132600-m02"
	I1014 07:23:31.337941   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:33.402839   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:33.402839   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:33.403340   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:35.894312   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:23:35.894312   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:35.901210   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:23:35.901699   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.111.83 22 <nil> <nil>}
	I1014 07:23:35.901823   13076 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-132600-m02 && echo "ha-132600-m02" | sudo tee /etc/hostname
	I1014 07:23:36.071040   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-132600-m02
	
	I1014 07:23:36.071040   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:38.150707   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:38.151729   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:38.151729   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:40.639242   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:23:40.639242   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:40.644598   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:23:40.645419   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.111.83 22 <nil> <nil>}
	I1014 07:23:40.645490   13076 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-132600-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-132600-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-132600-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 07:23:40.801974   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 07:23:40.801974   13076 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I1014 07:23:40.801974   13076 buildroot.go:174] setting up certificates
	I1014 07:23:40.801974   13076 provision.go:84] configureAuth start
	I1014 07:23:40.801974   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:42.869101   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:42.869101   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:42.869101   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:45.382211   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:23:45.382688   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:45.382688   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:47.501182   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:47.501778   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:47.501832   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:50.008838   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:23:50.008838   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:50.009193   13076 provision.go:143] copyHostCerts
	I1014 07:23:50.009346   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I1014 07:23:50.009346   13076 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I1014 07:23:50.009346   13076 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I1014 07:23:50.010015   13076 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1014 07:23:50.011395   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I1014 07:23:50.011967   13076 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I1014 07:23:50.011967   13076 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I1014 07:23:50.011967   13076 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1014 07:23:50.013212   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I1014 07:23:50.014109   13076 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I1014 07:23:50.014109   13076 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I1014 07:23:50.014507   13076 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I1014 07:23:50.015649   13076 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-132600-m02 san=[127.0.0.1 172.20.111.83 ha-132600-m02 localhost minikube]
	I1014 07:23:50.130152   13076 provision.go:177] copyRemoteCerts
	I1014 07:23:50.140319   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 07:23:50.140319   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:52.242533   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:52.242533   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:52.243067   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:54.757999   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:23:54.758132   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:54.758132   13076 sshutil.go:53] new ssh client: &{IP:172.20.111.83 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\id_rsa Username:docker}
	I1014 07:23:54.867658   13076 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7273333s)
	I1014 07:23:54.867658   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1014 07:23:54.867658   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1014 07:23:54.915126   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1014 07:23:54.915808   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I1014 07:23:54.962775   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1014 07:23:54.963449   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1014 07:23:55.012733   13076 provision.go:87] duration metric: took 14.2107408s to configureAuth
	I1014 07:23:55.012733   13076 buildroot.go:189] setting minikube options for container-runtime
	I1014 07:23:55.013734   13076 config.go:182] Loaded profile config "ha-132600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:23:55.013734   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:57.174688   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:57.174688   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:57.175736   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:59.668037   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:23:59.668597   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:59.675074   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:23:59.675608   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.111.83 22 <nil> <nil>}
	I1014 07:23:59.675873   13076 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1014 07:23:59.826626   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1014 07:23:59.826626   13076 buildroot.go:70] root file system type: tmpfs
	I1014 07:23:59.826926   13076 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1014 07:23:59.827038   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:01.963890   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:01.964568   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:01.964568   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:04.515125   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:04.515943   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:04.521824   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:24:04.522248   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.111.83 22 <nil> <nil>}
	I1014 07:24:04.522248   13076 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.20.108.120"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1014 07:24:04.691427   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.20.108.120
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1014 07:24:04.692090   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:06.819823   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:06.820339   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:06.820339   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:09.321630   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:09.321630   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:09.326748   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:24:09.327748   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.111.83 22 <nil> <nil>}
	I1014 07:24:09.327808   13076 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1014 07:24:11.561493   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1014 07:24:11.561493   13076 machine.go:96] duration metric: took 44.993809s to provisionDockerMachine
	I1014 07:24:11.561493   13076 client.go:171] duration metric: took 1m54.3893961s to LocalClient.Create
	I1014 07:24:11.561493   13076 start.go:167] duration metric: took 1m54.3893961s to libmachine.API.Create "ha-132600"
	I1014 07:24:11.561493   13076 start.go:293] postStartSetup for "ha-132600-m02" (driver="hyperv")
	I1014 07:24:11.561493   13076 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 07:24:11.572490   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 07:24:11.572490   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:13.652544   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:13.653401   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:13.653599   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:16.188638   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:16.188638   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:16.189304   13076 sshutil.go:53] new ssh client: &{IP:172.20.111.83 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\id_rsa Username:docker}
	I1014 07:24:16.298812   13076 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7262867s)
	I1014 07:24:16.310231   13076 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 07:24:16.316927   13076 info.go:137] Remote host: Buildroot 2023.02.9
	I1014 07:24:16.316927   13076 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I1014 07:24:16.317465   13076 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I1014 07:24:16.318548   13076 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem -> 9362.pem in /etc/ssl/certs
	I1014 07:24:16.318548   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem -> /etc/ssl/certs/9362.pem
	I1014 07:24:16.332285   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 07:24:16.351469   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem --> /etc/ssl/certs/9362.pem (1708 bytes)
	I1014 07:24:16.401820   13076 start.go:296] duration metric: took 4.8403207s for postStartSetup
	I1014 07:24:16.404096   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:18.500218   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:18.501234   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:18.501234   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:20.974982   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:20.975125   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:20.975125   13076 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\config.json ...
	I1014 07:24:20.978124   13076 start.go:128] duration metric: took 2m3.8086365s to createHost
	I1014 07:24:20.978209   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:23.112937   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:23.113001   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:23.113001   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:25.636675   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:25.637206   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:25.641953   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:24:25.642670   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.111.83 22 <nil> <nil>}
	I1014 07:24:25.642670   13076 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1014 07:24:25.768945   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728915865.766702127
	
	I1014 07:24:25.769174   13076 fix.go:216] guest clock: 1728915865.766702127
	I1014 07:24:25.769174   13076 fix.go:229] Guest: 2024-10-14 07:24:25.766702127 -0700 PDT Remote: 2024-10-14 07:24:20.978124 -0700 PDT m=+320.730336401 (delta=4.788578127s)
	I1014 07:24:25.769174   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:27.845838   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:27.845838   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:27.845838   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:30.364175   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:30.364263   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:30.369382   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:24:30.369967   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.111.83 22 <nil> <nil>}
	I1014 07:24:30.370049   13076 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1728915865
	I1014 07:24:30.508807   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Oct 14 14:24:25 UTC 2024
	
	I1014 07:24:30.508807   13076 fix.go:236] clock set: Mon Oct 14 14:24:25 UTC 2024
	 (err=<nil>)
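
The fix.go lines above read the guest clock with `date +%s.%N`, compute the guest/host delta (4.788578127s here), and push the host time into the guest via `sudo date -s @<seconds>`. A small sketch of the delta computation, assuming a hypothetical parseGuestClock helper and a hypothetical 2s threshold, using the values from this log:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock parses `date +%s.%N` output. It assumes the fractional
// part is the 9-digit nanosecond field that %N prints.
func parseGuestClock(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, _ := parseGuestClock("1728915865.766702127") // value from the log
	// Host timestamp from the log ("Remote: 2024-10-14 07:24:20.978124 -0700 PDT");
	// time.Local reproduces the logged delta only when run in that zone.
	host := time.Date(2024, 10, 14, 7, 24, 20, 978124000, time.Local)
	delta := guest.Sub(host)
	fmt.Printf("guest/host clock delta: %s\n", delta) // ≈ 4.788578127s
	if delta > 2*time.Second || delta < -2*time.Second {
		fmt.Printf("would run: sudo date -s @%d\n", guest.Unix())
	}
}
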
	I1014 07:24:30.508807   13076 start.go:83] releasing machines lock for "ha-132600-m02", held for 2m13.3394475s
	I1014 07:24:30.508807   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:32.570516   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:32.570516   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:32.570516   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:35.081338   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:35.081338   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:35.084330   13076 out.go:177] * Found network options:
	I1014 07:24:35.087197   13076 out.go:177]   - NO_PROXY=172.20.108.120
	W1014 07:24:35.089531   13076 proxy.go:119] fail to check proxy env: Error ip not in block
	I1014 07:24:35.091879   13076 out.go:177]   - NO_PROXY=172.20.108.120
	W1014 07:24:35.094409   13076 proxy.go:119] fail to check proxy env: Error ip not in block
	W1014 07:24:35.095622   13076 proxy.go:119] fail to check proxy env: Error ip not in block
	I1014 07:24:35.098960   13076 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1014 07:24:35.099124   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:35.109488   13076 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1014 07:24:35.109488   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:37.293745   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:37.293818   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:37.293870   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:37.344951   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:37.344951   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:37.344951   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:39.933767   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:39.934139   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:39.934445   13076 sshutil.go:53] new ssh client: &{IP:172.20.111.83 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\id_rsa Username:docker}
	I1014 07:24:39.999458   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:39.999458   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:39.999458   13076 sshutil.go:53] new ssh client: &{IP:172.20.111.83 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\id_rsa Username:docker}
	I1014 07:24:40.039802   13076 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.930308s)
	W1014 07:24:40.039802   13076 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 07:24:40.051889   13076 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 07:24:40.057601   13076 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.9585844s)
	W1014 07:24:40.057601   13076 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
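
The stderr above is the root cause of the "! Failing to connect to https://registry.k8s.io/" warning that follows: the connectivity probe ran `curl.exe` (the Windows binary name) inside the Linux guest over SSH, where only `curl` exists, so the probe exits with status 127 regardless of actual network reachability. A hedged illustration of the host/guest binary-name split (curlName is a made-up helper, not minikube code):

package main

import "fmt"

// curlName is illustrative: the probe command should be named for the OS
// that executes it (the Linux guest), not the Windows host driving the test.
func curlName(remoteIsLinux bool) string {
	if remoteIsLinux {
		return "curl"
	}
	return "curl.exe"
}

func main() {
	fmt.Println(curlName(true))  // "curl"     -> what the guest can run
	fmt.Println(curlName(false)) // "curl.exe" -> what this log actually ran
}
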
	I1014 07:24:40.090774   13076 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1014 07:24:40.090774   13076 start.go:495] detecting cgroup driver to use...
	I1014 07:24:40.091315   13076 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 07:24:40.139757   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1014 07:24:40.172033   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	W1014 07:24:40.178804   13076 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W1014 07:24:40.178804   13076 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1014 07:24:40.194040   13076 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1014 07:24:40.209169   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1014 07:24:40.241042   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1014 07:24:40.273436   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1014 07:24:40.304842   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1014 07:24:40.336705   13076 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 07:24:40.371635   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1014 07:24:40.404378   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1014 07:24:40.435805   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1014 07:24:40.485319   13076 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 07:24:40.505473   13076 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
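
The status-255 sysctl above is an expected probe failure: /proc/sys/net/bridge/bridge-nf-call-iptables only exists once br_netfilter is loaded, which is why the very next command in the log is `sudo modprobe br_netfilter`. A minimal sketch of that probe-then-fallback idiom (commands taken from the log; error handling simplified):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Probe: succeeds only if br_netfilter is already loaded.
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		fmt.Println("sysctl probe failed (expected when the module is not loaded):", err)
		// Fallback: load the module so the sysctl key appears.
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			fmt.Println("modprobe br_netfilter failed:", err)
		}
	}
}
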
	I1014 07:24:40.516975   13076 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1014 07:24:40.550905   13076 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 07:24:40.581632   13076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:24:40.785773   13076 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1014 07:24:40.819978   13076 start.go:495] detecting cgroup driver to use...
	I1014 07:24:40.830992   13076 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1014 07:24:40.876801   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 07:24:40.913930   13076 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 07:24:40.966444   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 07:24:41.006432   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1014 07:24:41.044531   13076 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1014 07:24:41.114617   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1014 07:24:41.139995   13076 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 07:24:41.187779   13076 ssh_runner.go:195] Run: which cri-dockerd
	I1014 07:24:41.207367   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1014 07:24:41.226988   13076 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1014 07:24:41.272924   13076 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1014 07:24:41.470204   13076 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1014 07:24:41.650925   13076 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1014 07:24:41.650925   13076 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1014 07:24:41.694417   13076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:24:41.893295   13076 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1014 07:25:43.008408   13076 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.115034s)
	I1014 07:25:43.020423   13076 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I1014 07:25:43.059257   13076 out.go:201] 
	W1014 07:25:43.062124   13076 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Oct 14 14:24:09 ha-132600-m02 systemd[1]: Starting Docker Application Container Engine...
	Oct 14 14:24:09 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:09.938489944Z" level=info msg="Starting up"
	Oct 14 14:24:09 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:09.941531731Z" level=info msg="containerd not running, starting managed containerd"
	Oct 14 14:24:09 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:09.942638427Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=671
	Oct 14 14:24:09 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:09.976195986Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.003978070Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004049170Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004119469Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004233069Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004318268Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004457268Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004664767Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004761167Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004782067Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004795467Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004887666Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.005331964Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.008453752Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.008545052Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.008673551Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.008759151Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.008859951Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.008988550Z" level=info msg="metadata content store policy set" policy=shared
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.034095951Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.034335550Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.034407650Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.034432250Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.034449450Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.034603849Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.034945948Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035129847Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035263947Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035294447Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035312846Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035328546Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035343646Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035419846Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035443346Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035495946Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035526946Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035542746Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035565345Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035583545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035598745Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035630745Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035648245Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035663945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035689645Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035709545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035731545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035750545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035764245Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035777845Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035792145Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035809345Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035833944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035849444Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035864444Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036025444Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036057644Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036074443Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036087843Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036099043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036115343Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036131843Z" level=info msg="NRI interface is disabled by configuration."
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036482842Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036737141Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036799541Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036822041Z" level=info msg="containerd successfully booted in 0.062283s"
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.009541514Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.046546477Z" level=info msg="Loading containers: start."
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.205429391Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.426461627Z" level=info msg="Loading containers: done."
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.448023414Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.448125314Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.448200613Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.448511212Z" level=info msg="Daemon has completed initialization"
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.556153647Z" level=info msg="API listen on /var/run/docker.sock"
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.556338446Z" level=info msg="API listen on [::]:2376"
	Oct 14 14:24:11 ha-132600-m02 systemd[1]: Started Docker Application Container Engine.
	Oct 14 14:24:41 ha-132600-m02 systemd[1]: Stopping Docker Application Container Engine...
	Oct 14 14:24:41 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:41.916201367Z" level=info msg="Processing signal 'terminated'"
	Oct 14 14:24:41 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:41.918697964Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Oct 14 14:24:41 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:41.919058063Z" level=info msg="Daemon shutdown complete"
	Oct 14 14:24:41 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:41.919215663Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Oct 14 14:24:41 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:41.919252063Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Oct 14 14:24:42 ha-132600-m02 systemd[1]: docker.service: Deactivated successfully.
	Oct 14 14:24:42 ha-132600-m02 systemd[1]: Stopped Docker Application Container Engine.
	Oct 14 14:24:42 ha-132600-m02 systemd[1]: Starting Docker Application Container Engine...
	Oct 14 14:24:42 ha-132600-m02 dockerd[1075]: time="2024-10-14T14:24:42.978494717Z" level=info msg="Starting up"
	Oct 14 14:25:43 ha-132600-m02 dockerd[1075]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Oct 14 14:25:43 ha-132600-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Oct 14 14:25:43 ha-132600-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Oct 14 14:25:43 ha-132600-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W1014 07:25:43.062124   13076 out.go:270] * 
	W1014 07:25:43.062827   13076 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 07:25:43.067027   13076 out.go:201] 
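The m02 startup failure above is self-contained: dockerd came up at 14:24:42 but could not dial /run/containerd/containerd.sock within roughly a minute, hit its dial deadline, and systemd marked docker.service failed. A minimal triage from the host, assuming the m02 VM is still reachable over SSH (profile and node names as used elsewhere in this report):

    minikube ssh -p ha-132600 -n ha-132600-m02
    # inside the VM: check both units, then read containerd's journal
    sudo systemctl status containerd docker --no-pager
    sudo journalctl -u containerd --no-pager | tail -n 50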
	
	
	==> Docker <==
	Oct 14 14:22:32 ha-132600 cri-dockerd[1324]: time="2024-10-14T14:22:32Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d7137fb8ec569089599a03e11d18eda509d210d29fa54b5e6a0a8d7dd7a54e7f/resolv.conf as [nameserver 172.20.96.1]"
	Oct 14 14:22:32 ha-132600 cri-dockerd[1324]: time="2024-10-14T14:22:32Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/90ffeb0ce1aafcb639832f7144bd2dbb0b5c9deb53555b49e386b3d3e96d8bc3/resolv.conf as [nameserver 172.20.96.1]"
	Oct 14 14:22:32 ha-132600 cri-dockerd[1324]: time="2024-10-14T14:22:32Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7278ecf7cc9466e1fee2170d699055f6498d2ece8da31c4b3696ed85b3cd38cf/resolv.conf as [nameserver 172.20.96.1]"
	Oct 14 14:22:32 ha-132600 dockerd[1431]: time="2024-10-14T14:22:32.922383735Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 14 14:22:32 ha-132600 dockerd[1431]: time="2024-10-14T14:22:32.922463635Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 14 14:22:32 ha-132600 dockerd[1431]: time="2024-10-14T14:22:32.922481235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:22:32 ha-132600 dockerd[1431]: time="2024-10-14T14:22:32.922580735Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:22:32 ha-132600 dockerd[1431]: time="2024-10-14T14:22:32.996609949Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 14 14:22:32 ha-132600 dockerd[1431]: time="2024-10-14T14:22:32.996807948Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 14 14:22:32 ha-132600 dockerd[1431]: time="2024-10-14T14:22:32.996907948Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:22:32 ha-132600 dockerd[1431]: time="2024-10-14T14:22:32.998275745Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:22:33 ha-132600 dockerd[1431]: time="2024-10-14T14:22:33.069443031Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 14 14:22:33 ha-132600 dockerd[1431]: time="2024-10-14T14:22:33.070022429Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 14 14:22:33 ha-132600 dockerd[1431]: time="2024-10-14T14:22:33.070369527Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:22:33 ha-132600 dockerd[1431]: time="2024-10-14T14:22:33.076841098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:26:17 ha-132600 dockerd[1431]: time="2024-10-14T14:26:17.675114485Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 14 14:26:17 ha-132600 dockerd[1431]: time="2024-10-14T14:26:17.675249284Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 14 14:26:17 ha-132600 dockerd[1431]: time="2024-10-14T14:26:17.675264584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:26:17 ha-132600 dockerd[1431]: time="2024-10-14T14:26:17.675974482Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:26:17 ha-132600 cri-dockerd[1324]: time="2024-10-14T14:26:17Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/91751edf004eb5236aa2443a5cff30bdc82ba05d39a3583d9026a4f19faba52f/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Oct 14 14:26:19 ha-132600 cri-dockerd[1324]: time="2024-10-14T14:26:19Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Oct 14 14:26:19 ha-132600 dockerd[1431]: time="2024-10-14T14:26:19.456978196Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 14 14:26:19 ha-132600 dockerd[1431]: time="2024-10-14T14:26:19.457173796Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 14 14:26:19 ha-132600 dockerd[1431]: time="2024-10-14T14:26:19.457616296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:26:19 ha-132600 dockerd[1431]: time="2024-10-14T14:26:19.457980197Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2b8d44d586369       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   21 minutes ago      Running             busybox                   0                   91751edf004eb       busybox-7dff88458-kr92j
	5a6196684fc6e       c69fa2e9cbf5f                                                                                         25 minutes ago      Running             coredns                   0                   7278ecf7cc946       coredns-7c65d6cfc9-4qfrq
	81d6fdac8115f       6e38f40d628db                                                                                         25 minutes ago      Running             storage-provisioner       0                   90ffeb0ce1aaf       storage-provisioner
	52c3e5370a6c8       c69fa2e9cbf5f                                                                                         25 minutes ago      Running             coredns                   0                   d7137fb8ec569       coredns-7c65d6cfc9-zf6cd
	dae2c0aa67af3       kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387              25 minutes ago      Running             kindnet-cni               0                   277e64b59b8aa       kindnet-rkjqr
	4745a4b0dc379       60c005f310ff3                                                                                         25 minutes ago      Running             kube-proxy                0                   15f605d31df67       kube-proxy-zkbj8
	b08f7a3c2a5a6       ghcr.io/kube-vip/kube-vip@sha256:9e23baad11ae3e69d739430b9fdb60df22356b7da4b4f4e458fae0541619deb4     25 minutes ago      Running             kube-vip                  0                   46cfd7b278d40       kube-vip-ha-132600
	84fd76d493d80       2e96e5913fc06                                                                                         25 minutes ago      Running             etcd                      0                   d71566d00e2f9       etcd-ha-132600
	b661cb6713103       6bab7719df100                                                                                         25 minutes ago      Running             kube-apiserver            0                   cb18be6165830       kube-apiserver-ha-132600
	35c870864a800       9aa1fad941575                                                                                         25 minutes ago      Running             kube-scheduler            0                   36593363b631a       kube-scheduler-ha-132600
	4a8cce31aa3a2       175ffd71cce3d                                                                                         25 minutes ago      Running             kube-controller-manager   0                   f5fa69fe68b07       kube-controller-manager-ha-132600
	
	
	==> coredns [52c3e5370a6c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6020d6b23d8ab86e45d1d2aab12a43bd19ffd1800c3e6fd0a66779be525e59cd5618fbe141b55ee3a43e920652bc72e7efafa696e55e63b2f614e5fc80dabe10
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:41885 - 43638 "HINFO IN 5147527986633681541.7511835962423692488. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.048565283s
	[INFO] 10.244.0.4:49801 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0004036s
	[INFO] 10.244.0.4:57121 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.178873632s
	[INFO] 10.244.0.4:54985 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000301s
	[INFO] 10.244.0.4:47603 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000273799s
	[INFO] 10.244.0.4:50030 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000276399s
	[INFO] 10.244.0.4:38124 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000439399s
	[INFO] 10.244.0.4:43764 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000347699s
	[INFO] 10.244.0.4:53583 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0003446s
	[INFO] 10.244.0.4:50771 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0002502s
	[INFO] 10.244.0.4:32805 - 5 "PTR IN 1.96.20.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.0002399s
	
	
	==> coredns [5a6196684fc6] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6020d6b23d8ab86e45d1d2aab12a43bd19ffd1800c3e6fd0a66779be525e59cd5618fbe141b55ee3a43e920652bc72e7efafa696e55e63b2f614e5fc80dabe10
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:50464 - 22593 "HINFO IN 8090798371432773748.7872157757733337044. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.084180423s
	[INFO] 10.244.0.4:46373 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.004105792s
	[INFO] 10.244.0.4:41175 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.166814894s
	[INFO] 10.244.0.4:43040 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002867s
	[INFO] 10.244.0.4:33549 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.045685805s
	[INFO] 10.244.0.4:60793 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000884598s
	[INFO] 10.244.0.4:57558 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001339s
	[INFO] 10.244.0.4:33138 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.040147217s
	[INFO] 10.244.0.4:46303 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0002673s
	[INFO] 10.244.0.4:60761 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001771s
	[INFO] 10.244.0.4:55567 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000249599s
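The query mix above (kubernetes.default, its default.svc.cluster.local search expansions, host.minikube.internal, and the reverse PTR lookups) matches the DNS checks the busybox test pod performs. Assuming kubectl points at this cluster's kubeconfig, the same lookups can be replayed with the pod named in the container status table above:

    kubectl exec busybox-7dff88458-kr92j -- nslookup kubernetes.default
    kubectl exec busybox-7dff88458-kr92j -- nslookup host.minikube.internal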
	
	
	==> describe nodes <==
	Name:               ha-132600
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-132600
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b
	                    minikube.k8s.io/name=ha-132600
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_14T07_22_04_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Oct 2024 14:22:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-132600
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Oct 2024 14:47:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 14 Oct 2024 14:47:05 +0000   Mon, 14 Oct 2024 14:22:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 14 Oct 2024 14:47:05 +0000   Mon, 14 Oct 2024 14:22:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 14 Oct 2024 14:47:05 +0000   Mon, 14 Oct 2024 14:22:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 14 Oct 2024 14:47:05 +0000   Mon, 14 Oct 2024 14:22:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.20.108.120
	  Hostname:    ha-132600
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 07bcc58a0c7e4f0eaae96253dcf0a4ae
	  System UUID:                e06d00fc-5ebc-1a49-bf31-4c6ce500fe9c
	  Boot ID:                    215d765d-07e3-4bb7-ba3c-58ab142d5afa
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-kr92j              0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 coredns-7c65d6cfc9-4qfrq             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     25m
	  kube-system                 coredns-7c65d6cfc9-zf6cd             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     25m
	  kube-system                 etcd-ha-132600                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         25m
	  kube-system                 kindnet-rkjqr                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      25m
	  kube-system                 kube-apiserver-ha-132600             250m (12%)    0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 kube-controller-manager-ha-132600    200m (10%)    0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 kube-proxy-zkbj8                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 kube-scheduler-ha-132600             100m (5%)     0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 kube-vip-ha-132600                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 25m   kube-proxy       
	  Normal  Starting                 25m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  25m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  25m   kubelet          Node ha-132600 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    25m   kubelet          Node ha-132600 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     25m   kubelet          Node ha-132600 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           25m   node-controller  Node ha-132600 event: Registered Node ha-132600 in Controller
	  Normal  NodeReady                25m   kubelet          Node ha-132600 status is now: NodeReady
	
	
	Name:               ha-132600-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-132600-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b
	                    minikube.k8s.io/name=ha-132600
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_14T07_42_14_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Oct 2024 14:42:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-132600-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Oct 2024 14:47:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 14 Oct 2024 14:43:14 +0000   Mon, 14 Oct 2024 14:42:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 14 Oct 2024 14:43:14 +0000   Mon, 14 Oct 2024 14:42:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 14 Oct 2024 14:43:14 +0000   Mon, 14 Oct 2024 14:42:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 14 Oct 2024 14:43:14 +0000   Mon, 14 Oct 2024 14:42:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.20.111.174
	  Hostname:    ha-132600-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 3833a05e76bc4842ac73a1d2223670ec
	  System UUID:                f1d27ade-116b-0642-ac95-727d62870b2a
	  Boot ID:                    07765716-ab52-4f7d-8d78-8fbd996c8cc0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-8thz6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kindnet-dznf8              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m27s
	  kube-system                 kube-proxy-q6wxd           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m15s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m28s (x2 over 5m28s)  kubelet          Node ha-132600-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  5m28s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  CIDRAssignmentFailed     5m27s                  cidrAllocator    Node ha-132600-m03 status is now: CIDRAssignmentFailed
	  Normal  NodeHasNoDiskPressure    5m27s (x2 over 5m28s)  kubelet          Node ha-132600-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m27s (x2 over 5m28s)  kubelet          Node ha-132600-m03 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m24s                  node-controller  Node ha-132600-m03 event: Registered Node ha-132600-m03 in Controller
	  Normal  NodeReady                4m55s                  kubelet          Node ha-132600-m03 status is now: NodeReady
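Only ha-132600 and ha-132600-m03 are described here; the m02 machine whose Docker engine failed to start never registered with the API server. Membership, addresses, and the CIDRAssignmentFailed event from m03's table can be cross-checked with (assuming the test kubeconfig is active):

    kubectl get nodes -o wide
    kubectl get events -A --field-selector reason=CIDRAssignmentFailed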
	
	
	==> dmesg <==
	[  +7.050916] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +45.956887] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.179434] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[Oct14 14:21] systemd-fstab-generator[997]: Ignoring "noauto" option for root device
	[  +0.095957] kauditd_printk_skb: 65 callbacks suppressed
	[  +0.527367] systemd-fstab-generator[1037]: Ignoring "noauto" option for root device
	[  +0.185829] systemd-fstab-generator[1049]: Ignoring "noauto" option for root device
	[  +0.220100] systemd-fstab-generator[1063]: Ignoring "noauto" option for root device
	[  +2.802946] systemd-fstab-generator[1277]: Ignoring "noauto" option for root device
	[  +0.209648] systemd-fstab-generator[1289]: Ignoring "noauto" option for root device
	[  +0.207153] systemd-fstab-generator[1301]: Ignoring "noauto" option for root device
	[  +0.288223] systemd-fstab-generator[1316]: Ignoring "noauto" option for root device
	[ +11.411364] systemd-fstab-generator[1416]: Ignoring "noauto" option for root device
	[  +0.107418] kauditd_printk_skb: 202 callbacks suppressed
	[  +3.752049] systemd-fstab-generator[1674]: Ignoring "noauto" option for root device
	[  +6.264384] systemd-fstab-generator[1823]: Ignoring "noauto" option for root device
	[  +0.102334] kauditd_printk_skb: 70 callbacks suppressed
	[  +5.637673] kauditd_printk_skb: 67 callbacks suppressed
	[Oct14 14:22] systemd-fstab-generator[2317]: Ignoring "noauto" option for root device
	[  +5.137329] kauditd_printk_skb: 17 callbacks suppressed
	[  +8.104433] kauditd_printk_skb: 29 callbacks suppressed
	[Oct14 14:26] kauditd_printk_skb: 28 callbacks suppressed
	[Oct14 14:42] hrtimer: interrupt took 6612884 ns
	
	
	==> etcd [84fd76d493d8] <==
	{"level":"warn","ts":"2024-10-14T14:42:06.838564Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"211.106689ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-14T14:42:06.838765Z","caller":"traceutil/trace.go:171","msg":"trace[744257141] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2578; }","duration":"211.373088ms","start":"2024-10-14T14:42:06.627381Z","end":"2024-10-14T14:42:06.838754Z","steps":["trace[744257141] 'agreement among raft nodes before linearized reading'  (duration: 210.908189ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-14T14:42:07.474608Z","caller":"traceutil/trace.go:171","msg":"trace[1611887997] linearizableReadLoop","detail":"{readStateIndex:2842; appliedIndex:2841; }","duration":"173.05738ms","start":"2024-10-14T14:42:07.301532Z","end":"2024-10-14T14:42:07.474589Z","steps":["trace[1611887997] 'read index received'  (duration: 172.879381ms)","trace[1611887997] 'applied index is now lower than readState.Index'  (duration: 177.299µs)"],"step_count":2}
	{"level":"info","ts":"2024-10-14T14:42:07.474817Z","caller":"traceutil/trace.go:171","msg":"trace[1935321534] transaction","detail":"{read_only:false; response_revision:2579; number_of_response:1; }","duration":"268.831248ms","start":"2024-10-14T14:42:07.205940Z","end":"2024-10-14T14:42:07.474771Z","steps":["trace[1935321534] 'process raft request'  (duration: 268.413149ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-14T14:42:07.475266Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"173.720679ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-14T14:42:07.475322Z","caller":"traceutil/trace.go:171","msg":"trace[213807606] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:2579; }","duration":"173.789679ms","start":"2024-10-14T14:42:07.301524Z","end":"2024-10-14T14:42:07.475314Z","steps":["trace[213807606] 'agreement among raft nodes before linearized reading'  (duration: 173.704979ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-14T14:42:19.018144Z","caller":"traceutil/trace.go:171","msg":"trace[410087756] transaction","detail":"{read_only:false; response_revision:2632; number_of_response:1; }","duration":"175.067373ms","start":"2024-10-14T14:42:18.843055Z","end":"2024-10-14T14:42:19.018123Z","steps":["trace[410087756] 'process raft request'  (duration: 174.514675ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-14T14:42:19.375954Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"237.591721ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-132600-m03\" ","response":"range_response_count:1 size:2962"}
	{"level":"info","ts":"2024-10-14T14:42:19.377451Z","caller":"traceutil/trace.go:171","msg":"trace[946791809] range","detail":"{range_begin:/registry/minions/ha-132600-m03; range_end:; response_count:1; response_revision:2632; }","duration":"239.116617ms","start":"2024-10-14T14:42:19.138318Z","end":"2024-10-14T14:42:19.377435Z","steps":["trace[946791809] 'range keys from in-memory index tree'  (duration: 237.374121ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-14T14:42:25.277317Z","caller":"traceutil/trace.go:171","msg":"trace[682053374] linearizableReadLoop","detail":"{readStateIndex:2917; appliedIndex:2916; }","duration":"140.762656ms","start":"2024-10-14T14:42:25.136515Z","end":"2024-10-14T14:42:25.277278Z","steps":["trace[682053374] 'read index received'  (duration: 140.586757ms)","trace[682053374] 'applied index is now lower than readState.Index'  (duration: 175.299µs)"],"step_count":2}
	{"level":"warn","ts":"2024-10-14T14:42:25.277542Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"140.959056ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-132600-m03\" ","response":"range_response_count:1 size:3114"}
	{"level":"info","ts":"2024-10-14T14:42:25.277634Z","caller":"traceutil/trace.go:171","msg":"trace[820348600] range","detail":"{range_begin:/registry/minions/ha-132600-m03; range_end:; response_count:1; response_revision:2648; }","duration":"141.112155ms","start":"2024-10-14T14:42:25.136511Z","end":"2024-10-14T14:42:25.277623Z","steps":["trace[820348600] 'agreement among raft nodes before linearized reading'  (duration: 140.928056ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-14T14:42:25.279675Z","caller":"traceutil/trace.go:171","msg":"trace[858523099] transaction","detail":"{read_only:false; response_revision:2648; number_of_response:1; }","duration":"166.194393ms","start":"2024-10-14T14:42:25.113464Z","end":"2024-10-14T14:42:25.279658Z","steps":["trace[858523099] 'process raft request'  (duration: 163.709399ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-14T14:42:25.921423Z","caller":"traceutil/trace.go:171","msg":"trace[45131034] transaction","detail":"{read_only:false; response_revision:2649; number_of_response:1; }","duration":"240.489012ms","start":"2024-10-14T14:42:25.680919Z","end":"2024-10-14T14:42:25.921408Z","steps":["trace[45131034] 'process raft request'  (duration: 240.146113ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-14T14:42:31.076106Z","caller":"traceutil/trace.go:171","msg":"trace[1663891298] transaction","detail":"{read_only:false; response_revision:2663; number_of_response:1; }","duration":"120.948703ms","start":"2024-10-14T14:42:30.955138Z","end":"2024-10-14T14:42:31.076086Z","steps":["trace[1663891298] 'process raft request'  (duration: 120.791304ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-14T14:42:31.287943Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"149.945532ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-132600-m03\" ","response":"range_response_count:1 size:3114"}
	{"level":"info","ts":"2024-10-14T14:42:31.288066Z","caller":"traceutil/trace.go:171","msg":"trace[1149000350] range","detail":"{range_begin:/registry/minions/ha-132600-m03; range_end:; response_count:1; response_revision:2663; }","duration":"150.051932ms","start":"2024-10-14T14:42:31.137978Z","end":"2024-10-14T14:42:31.288030Z","steps":["trace[1149000350] 'range keys from in-memory index tree'  (duration: 149.753933ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-14T14:42:32.313366Z","caller":"traceutil/trace.go:171","msg":"trace[1148264888] linearizableReadLoop","detail":"{readStateIndex:2939; appliedIndex:2938; }","duration":"175.689169ms","start":"2024-10-14T14:42:32.137533Z","end":"2024-10-14T14:42:32.313222Z","steps":["trace[1148264888] 'read index received'  (duration: 175.27797ms)","trace[1148264888] 'applied index is now lower than readState.Index'  (duration: 410.299µs)"],"step_count":2}
	{"level":"info","ts":"2024-10-14T14:42:32.313766Z","caller":"traceutil/trace.go:171","msg":"trace[53442055] transaction","detail":"{read_only:false; response_revision:2667; number_of_response:1; }","duration":"341.166363ms","start":"2024-10-14T14:42:31.972588Z","end":"2024-10-14T14:42:32.313754Z","steps":["trace[53442055] 'process raft request'  (duration: 340.380665ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-14T14:42:32.314054Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-14T14:42:31.972576Z","time spent":"341.263763ms","remote":"127.0.0.1:55456","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1094,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:2661 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1021 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-10-14T14:42:32.314975Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"177.510264ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-132600-m03\" ","response":"range_response_count:1 size:3114"}
	{"level":"info","ts":"2024-10-14T14:42:32.315174Z","caller":"traceutil/trace.go:171","msg":"trace[597120152] range","detail":"{range_begin:/registry/minions/ha-132600-m03; range_end:; response_count:1; response_revision:2667; }","duration":"177.711264ms","start":"2024-10-14T14:42:32.137452Z","end":"2024-10-14T14:42:32.315163Z","steps":["trace[597120152] 'agreement among raft nodes before linearized reading'  (duration: 177.444264ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-14T14:46:56.596123Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2559}
	{"level":"info","ts":"2024-10-14T14:46:56.611577Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":2559,"took":"14.540461ms","hash":3178179299,"current-db-size-bytes":2416640,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2015232,"current-db-size-in-use":"2.0 MB"}
	{"level":"info","ts":"2024-10-14T14:46:56.611712Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3178179299,"revision":2559,"compact-revision":2024}
	
	
	==> kernel <==
	 14:47:41 up 27 min,  0 users,  load average: 0.88, 0.48, 0.38
	Linux ha-132600 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [dae2c0aa67af] <==
	I1014 14:46:37.565175       1 main.go:323] Node ha-132600-m03 has CIDR [10.244.1.0/24] 
	I1014 14:46:47.568973       1 main.go:296] Handling node with IPs: map[172.20.108.120:{}]
	I1014 14:46:47.569551       1 main.go:300] handling current node
	I1014 14:46:47.569591       1 main.go:296] Handling node with IPs: map[172.20.111.174:{}]
	I1014 14:46:47.569602       1 main.go:323] Node ha-132600-m03 has CIDR [10.244.1.0/24] 
	I1014 14:46:57.563192       1 main.go:296] Handling node with IPs: map[172.20.108.120:{}]
	I1014 14:46:57.563436       1 main.go:300] handling current node
	I1014 14:46:57.563457       1 main.go:296] Handling node with IPs: map[172.20.111.174:{}]
	I1014 14:46:57.563468       1 main.go:323] Node ha-132600-m03 has CIDR [10.244.1.0/24] 
	I1014 14:47:07.563333       1 main.go:296] Handling node with IPs: map[172.20.108.120:{}]
	I1014 14:47:07.563468       1 main.go:300] handling current node
	I1014 14:47:07.563649       1 main.go:296] Handling node with IPs: map[172.20.111.174:{}]
	I1014 14:47:07.563747       1 main.go:323] Node ha-132600-m03 has CIDR [10.244.1.0/24] 
	I1014 14:47:17.563730       1 main.go:296] Handling node with IPs: map[172.20.108.120:{}]
	I1014 14:47:17.564343       1 main.go:300] handling current node
	I1014 14:47:17.564554       1 main.go:296] Handling node with IPs: map[172.20.111.174:{}]
	I1014 14:47:17.564614       1 main.go:323] Node ha-132600-m03 has CIDR [10.244.1.0/24] 
	I1014 14:47:27.565117       1 main.go:296] Handling node with IPs: map[172.20.108.120:{}]
	I1014 14:47:27.565427       1 main.go:300] handling current node
	I1014 14:47:27.565450       1 main.go:296] Handling node with IPs: map[172.20.111.174:{}]
	I1014 14:47:27.565459       1 main.go:323] Node ha-132600-m03 has CIDR [10.244.1.0/24] 
	I1014 14:47:37.565821       1 main.go:296] Handling node with IPs: map[172.20.111.174:{}]
	I1014 14:47:37.566149       1 main.go:323] Node ha-132600-m03 has CIDR [10.244.1.0/24] 
	I1014 14:47:37.566994       1 main.go:296] Handling node with IPs: map[172.20.108.120:{}]
	I1014 14:47:37.567033       1 main.go:300] handling current node
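The loop above is kindnet's whole control cycle: every ten seconds it enumerates the nodes and, for each remote one, maintains a route to that node's PodCIDR via its InternalIP. On the primary node the result should be a kernel route for m03's 10.244.1.0/24 via 172.20.111.174; a quick spot-check (a sketch, grepping for the 10.244.0.0/16 cluster range used throughout this report):

    minikube ssh -p ha-132600 -- ip route | grep 10.244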
	
	
	==> kube-apiserver [b661cb671310] <==
	I1014 14:21:58.925388       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1014 14:21:58.925426       1 policy_source.go:224] refreshing policies
	I1014 14:21:58.928296       1 controller.go:615] quota admission added evaluator for: namespaces
	E1014 14:21:58.950810       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1014 14:21:59.174958       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1014 14:21:59.819127       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1014 14:21:59.829797       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1014 14:21:59.829835       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1014 14:22:00.942976       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1014 14:22:01.015766       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1014 14:22:01.150423       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1014 14:22:01.164901       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.20.108.120]
	I1014 14:22:01.165818       1 controller.go:615] quota admission added evaluator for: endpoints
	I1014 14:22:01.175247       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1014 14:22:01.835583       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1014 14:22:03.554010       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1014 14:22:03.589407       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1014 14:22:03.617368       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1014 14:22:07.346752       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1014 14:22:07.516653       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E1014 14:38:03.250866       1 conn.go:339] Error on socket receive: read tcp 172.20.111.254:8443->172.20.96.1:57382: use of closed network connection
	E1014 14:38:04.469699       1 conn.go:339] Error on socket receive: read tcp 172.20.111.254:8443->172.20.96.1:57390: use of closed network connection
	E1014 14:38:05.596658       1 conn.go:339] Error on socket receive: read tcp 172.20.111.254:8443->172.20.96.1:57398: use of closed network connection
	E1014 14:38:40.228042       1 conn.go:339] Error on socket receive: read tcp 172.20.111.254:8443->172.20.96.1:57419: use of closed network connection
	E1014 14:38:50.673483       1 conn.go:339] Error on socket receive: read tcp 172.20.111.254:8443->172.20.96.1:57421: use of closed network connection
	
	
	==> kube-controller-manager [4a8cce31aa3a] <==
	I1014 14:42:14.020113       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-132600-m03\" does not exist"
	I1014 14:42:14.099308       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-132600-m03" podCIDRs=["10.244.1.0/24"]
	I1014 14:42:14.099376       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600-m03"
	E1014 14:42:14.131350       1 range_allocator.go:427] "Failed to update node PodCIDR after multiple attempts" err="failed to patch node CIDR: Node \"ha-132600-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.2.0/24\", \"10.244.1.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="ha-132600-m03" podCIDRs=["10.244.2.0/24"]
	E1014 14:42:14.131490       1 range_allocator.go:433] "CIDR assignment for node failed. Releasing allocated CIDR" err="failed to patch node CIDR: Node \"ha-132600-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.2.0/24\", \"10.244.1.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="ha-132600-m03"
	E1014 14:42:14.131543       1 range_allocator.go:246] "Unhandled Error" err="error syncing 'ha-132600-m03': failed to patch node CIDR: Node \"ha-132600-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.2.0/24\", \"10.244.1.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid], requeuing" logger="UnhandledError"
	I1014 14:42:14.132334       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600-m03"
	I1014 14:42:14.137556       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600-m03"
	I1014 14:42:14.252734       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600-m03"
	I1014 14:42:14.858650       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600-m03"
	I1014 14:42:17.081634       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-132600-m03"
	I1014 14:42:17.154621       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600-m03"
	I1014 14:42:24.267607       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600-m03"
	I1014 14:42:44.660569       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600-m03"
	I1014 14:42:46.187605       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-132600-m03"
	I1014 14:42:46.189657       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600-m03"
	I1014 14:42:46.206592       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600-m03"
	I1014 14:42:46.221387       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="51µs"
	I1014 14:42:46.232286       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="51.3µs"
	I1014 14:42:46.252360       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="196µs"
	I1014 14:42:47.108442       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600-m03"
	I1014 14:42:49.071073       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="17.613156ms"
	I1014 14:42:49.071582       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="49.8µs"
	I1014 14:43:14.925968       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600-m03"
	I1014 14:47:05.189799       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600"
	
	
	==> kube-proxy [4745a4b0dc37] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1014 14:22:09.024513       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1014 14:22:09.047174       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["172.20.108.120"]
	E1014 14:22:09.047308       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1014 14:22:09.124346       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1014 14:22:09.124494       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1014 14:22:09.124529       1 server_linux.go:169] "Using iptables Proxier"
	I1014 14:22:09.128809       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1014 14:22:09.129721       1 server.go:483] "Version info" version="v1.31.1"
	I1014 14:22:09.129805       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 14:22:09.131652       1 config.go:199] "Starting service config controller"
	I1014 14:22:09.131988       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1014 14:22:09.132269       1 config.go:105] "Starting endpoint slice config controller"
	I1014 14:22:09.132619       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1014 14:22:09.133592       1 config.go:328] "Starting node config controller"
	I1014 14:22:09.135184       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1014 14:22:09.232621       1 shared_informer.go:320] Caches are synced for service config
	I1014 14:22:09.233933       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1014 14:22:09.236054       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [35c870864a80] <==
	W1014 14:21:59.944424       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1014 14:21:59.944452       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 14:21:59.979078       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1014 14:21:59.979365       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 14:22:00.111250       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1014 14:22:00.111664       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 14:22:00.192015       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1014 14:22:00.192346       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1014 14:22:00.196984       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1014 14:22:00.197112       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 14:22:00.197208       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1014 14:22:00.197241       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1014 14:22:00.212172       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1014 14:22:00.212214       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1014 14:22:00.260144       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1014 14:22:00.261002       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1014 14:22:00.276235       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1014 14:22:00.276419       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1014 14:22:00.295031       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1014 14:22:00.295892       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 14:22:00.311664       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1014 14:22:00.313212       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1014 14:22:00.354351       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1014 14:22:00.355204       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 14:22:02.117335       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 14 14:43:03 ha-132600 kubelet[2324]: E1014 14:43:03.675510    2324 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 14 14:43:03 ha-132600 kubelet[2324]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 14 14:43:03 ha-132600 kubelet[2324]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 14 14:43:03 ha-132600 kubelet[2324]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 14 14:43:03 ha-132600 kubelet[2324]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 14 14:44:03 ha-132600 kubelet[2324]: E1014 14:44:03.676935    2324 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 14 14:44:03 ha-132600 kubelet[2324]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 14 14:44:03 ha-132600 kubelet[2324]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 14 14:44:03 ha-132600 kubelet[2324]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 14 14:44:03 ha-132600 kubelet[2324]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 14 14:45:03 ha-132600 kubelet[2324]: E1014 14:45:03.682657    2324 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 14 14:45:03 ha-132600 kubelet[2324]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 14 14:45:03 ha-132600 kubelet[2324]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 14 14:45:03 ha-132600 kubelet[2324]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 14 14:45:03 ha-132600 kubelet[2324]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 14 14:46:03 ha-132600 kubelet[2324]: E1014 14:46:03.687832    2324 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 14 14:46:03 ha-132600 kubelet[2324]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 14 14:46:03 ha-132600 kubelet[2324]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 14 14:46:03 ha-132600 kubelet[2324]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 14 14:46:03 ha-132600 kubelet[2324]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 14 14:47:03 ha-132600 kubelet[2324]: E1014 14:47:03.679016    2324 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 14 14:47:03 ha-132600 kubelet[2324]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 14 14:47:03 ha-132600 kubelet[2324]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 14 14:47:03 ha-132600 kubelet[2324]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 14 14:47:03 ha-132600 kubelet[2324]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
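A note on the controller-manager tail above: the node-ipam-controller first set ha-132600-m03's PodCIDR to 10.244.1.0/24, then a second allocation attempt tried to patch 10.244.2.0/24 onto the same node and was rejected ("may specify no more than one CIDR for each IP family"), after which the allocated CIDR was released. A quick way to verify what each node actually ended up holding is to dump spec.podCIDRs. The sketch below is a minimal standalone check, assuming kubectl is on PATH with the right context selected; the struct mirrors only the Node fields it reads.

	// podcidrs.go - minimal sketch: print each node's spec.podCIDR(s) via kubectl.
	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	type nodeList struct {
		Items []struct {
			Metadata struct {
				Name string `json:"name"`
			} `json:"metadata"`
			Spec struct {
				PodCIDR  string   `json:"podCIDR"`
				PodCIDRs []string `json:"podCIDRs"`
			} `json:"spec"`
		} `json:"items"`
	}

	func main() {
		out, err := exec.Command("kubectl", "get", "nodes", "-o", "json").Output()
		if err != nil {
			log.Fatalf("kubectl get nodes: %v", err)
		}
		var nl nodeList
		if err := json.Unmarshal(out, &nl); err != nil {
			log.Fatalf("decode: %v", err)
		}
		for _, n := range nl.Items {
			// A healthy single-stack node carries exactly one entry here.
			fmt.Printf("%s\tpodCIDR=%s\tpodCIDRs=%v\n",
				n.Metadata.Name, n.Spec.PodCIDR, n.Spec.PodCIDRs)
		}
	}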
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-132600 -n ha-132600
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-132600 -n ha-132600: (11.6779688s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-132600 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-7dff88458-rng7p
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-132600 describe pod busybox-7dff88458-rng7p
helpers_test.go:282: (dbg) kubectl --context ha-132600 describe pod busybox-7dff88458-rng7p:

                                                
                                                
-- stdout --
	Name:             busybox-7dff88458-rng7p
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7dff88458
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7dff88458
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9hpj2 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-9hpj2:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                   From               Message
	  ----     ------            ----                  ----               -------
	  Warning  FailedScheduling  6m21s (x4 over 21m)   default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  4m58s (x2 over 5m8s)  default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
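The describe output above shows why busybox-7dff88458-rng7p stays Pending: the deployment spreads its replicas with pod anti-affinity, and once m02 is stopped every remaining node already hosts a replica, so the scheduler reports "didn't match pod anti-affinity rules" with no preemption victims. The post-mortem helper (helpers_test.go:261) finds such pods with a field selector on status.phase; below is a minimal standalone sketch of the same query, reusing the context name from this run (adjust for other profiles).

	// pendingpods.go - minimal sketch: list non-Running pods, mirroring the
	// field-selector query the test's post-mortem helper runs.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("kubectl",
			"--context", "ha-132600", // profile name from this run
			"get", "po", "-A",
			"--field-selector", "status.phase!=Running",
			"-o", "jsonpath={.items[*].metadata.name}",
		).Output()
		if err != nil {
			log.Fatalf("kubectl: %v", err)
		}
		fmt.Printf("non-running pods: %s\n", out)
	}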
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (102.48s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (57.86s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:392: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (25.2452498s)
ha_test.go:415: expected profile "ha-132600" in json of 'profile list' to have "Degraded" status but have "OK" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-132600\",\"Status\":\"OK\",\"Config\":{\"Name\":\"ha-132600\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"hyperv\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIS
erverPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-132600\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"172.20.111.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"172.20.108.120\",\"Port\":8443,\"Kubernetes
Version\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"172.20.111.83\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"172.20.111.174\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plug
in\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"C:\\\\Users\\\\jenkins.minikube1:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",
\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-windows-amd64.exe profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-132600 -n ha-132600
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-132600 -n ha-132600: (11.6841292s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-132600 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-132600 logs -n 25: (8.066637s)
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| kubectl | -p ha-132600 -- get pods -o          | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:36 PDT | 14 Oct 24 07:36 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- get pods -o          | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:36 PDT | 14 Oct 24 07:36 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- get pods -o          | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:36 PDT | 14 Oct 24 07:36 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- get pods -o          | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:36 PDT | 14 Oct 24 07:36 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- get pods -o          | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:36 PDT | 14 Oct 24 07:36 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- get pods -o          | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:37 PDT | 14 Oct 24 07:37 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- get pods -o          | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:37 PDT | 14 Oct 24 07:37 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- get pods -o          | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT | 14 Oct 24 07:38 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- get pods -o          | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT | 14 Oct 24 07:38 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT |                     |
	|         | busybox-7dff88458-8thz6 --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT | 14 Oct 24 07:38 PDT |
	|         | busybox-7dff88458-kr92j --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT |                     |
	|         | busybox-7dff88458-rng7p --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT |                     |
	|         | busybox-7dff88458-8thz6 --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT | 14 Oct 24 07:38 PDT |
	|         | busybox-7dff88458-kr92j --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT |                     |
	|         | busybox-7dff88458-rng7p --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT |                     |
	|         | busybox-7dff88458-8thz6 -- nslookup  |           |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT | 14 Oct 24 07:38 PDT |
	|         | busybox-7dff88458-kr92j -- nslookup  |           |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT |                     |
	|         | busybox-7dff88458-rng7p -- nslookup  |           |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- get pods -o          | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT | 14 Oct 24 07:38 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT |                     |
	|         | busybox-7dff88458-8thz6              |           |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT | 14 Oct 24 07:38 PDT |
	|         | busybox-7dff88458-kr92j              |           |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT |                     |
	|         | busybox-7dff88458-kr92j -- sh        |           |                   |         |                     |                     |
	|         | -c ping -c 1 172.20.96.1             |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT |                     |
	|         | busybox-7dff88458-rng7p              |           |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |         |                     |                     |
	| node    | add -p ha-132600 -v=7                | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:39 PDT | 14 Oct 24 07:42 PDT |
	|         | --alsologtostderr                    |           |                   |         |                     |                     |
	| node    | ha-132600 node stop m02 -v=7         | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:46 PDT | 14 Oct 24 07:46 PDT |
	|         | --alsologtostderr                    |           |                   |         |                     |                     |
	|---------|--------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/14 07:19:00
	Running on machine: minikube1
	Binary: Built with gc go1.23.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 07:19:00.342865   13076 out.go:345] Setting OutFile to fd 1228 ...
	I1014 07:19:00.344546   13076 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:19:00.344546   13076 out.go:358] Setting ErrFile to fd 824...
	I1014 07:19:00.344546   13076 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:19:00.368247   13076 out.go:352] Setting JSON to false
	I1014 07:19:00.372277   13076 start.go:129] hostinfo: {"hostname":"minikube1","uptime":101054,"bootTime":1728814485,"procs":202,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5011 Build 19045.5011","kernelVersion":"10.0.19045.5011 Build 19045.5011","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W1014 07:19:00.372803   13076 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1014 07:19:00.379728   13076 out.go:177] * [ha-132600] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5011 Build 19045.5011
	I1014 07:19:00.386820   13076 notify.go:220] Checking for updates...
	I1014 07:19:00.389571   13076 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1014 07:19:00.392679   13076 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 07:19:00.395167   13076 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I1014 07:19:00.398285   13076 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 07:19:00.400520   13076 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 07:19:00.404118   13076 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 07:19:05.654471   13076 out.go:177] * Using the hyperv driver based on user configuration
	I1014 07:19:05.658285   13076 start.go:297] selected driver: hyperv
	I1014 07:19:05.658285   13076 start.go:901] validating driver "hyperv" against <nil>
	I1014 07:19:05.658285   13076 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 07:19:05.705211   13076 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1014 07:19:05.706541   13076 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 07:19:05.706541   13076 cni.go:84] Creating CNI manager for ""
	I1014 07:19:05.706541   13076 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1014 07:19:05.706541   13076 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1014 07:19:05.707066   13076 start.go:340] cluster config:
	{Name:ha-132600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-132600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthS
ock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 07:19:05.707207   13076 iso.go:125] acquiring lock: {Name:mkb71635bcbccba29dce9048ad4cc71430a7e577 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 07:19:05.711264   13076 out.go:177] * Starting "ha-132600" primary control-plane node in "ha-132600" cluster
	I1014 07:19:05.715097   13076 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1014 07:19:05.715262   13076 preload.go:146] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I1014 07:19:05.715344   13076 cache.go:56] Caching tarball of preloaded images
	I1014 07:19:05.715452   13076 preload.go:172] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1014 07:19:05.715452   13076 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1014 07:19:05.716553   13076 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\config.json ...
	I1014 07:19:05.716898   13076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\config.json: {Name:mkbd11bf3f90adebf1f4630d2e8deaac328748d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:19:05.717886   13076 start.go:360] acquireMachinesLock for ha-132600: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 07:19:05.717886   13076 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-132600"
	I1014 07:19:05.718562   13076 start.go:93] Provisioning new machine with config: &{Name:ha-132600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuberne
tesVersion:v1.31.1 ClusterName:ha-132600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1014 07:19:05.718562   13076 start.go:125] createHost starting for "" (driver="hyperv")
	I1014 07:19:05.721891   13076 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1014 07:19:05.722555   13076 start.go:159] libmachine.API.Create for "ha-132600" (driver="hyperv")
	I1014 07:19:05.722555   13076 client.go:168] LocalClient.Create starting
	I1014 07:19:05.722555   13076 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I1014 07:19:05.723316   13076 main.go:141] libmachine: Decoding PEM data...
	I1014 07:19:05.723316   13076 main.go:141] libmachine: Parsing certificate...
	I1014 07:19:05.723316   13076 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I1014 07:19:05.723877   13076 main.go:141] libmachine: Decoding PEM data...
	I1014 07:19:05.723955   13076 main.go:141] libmachine: Parsing certificate...
	I1014 07:19:05.723955   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I1014 07:19:07.752927   13076 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I1014 07:19:07.753576   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:07.753793   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I1014 07:19:09.445838   13076 main.go:141] libmachine: [stdout =====>] : False
	
	I1014 07:19:09.445838   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:09.446315   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1014 07:19:10.901034   13076 main.go:141] libmachine: [stdout =====>] : True
	
	I1014 07:19:10.901034   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:10.901034   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1014 07:19:14.320078   13076 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1014 07:19:14.320309   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:14.323253   13076 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1014 07:19:14.829443   13076 main.go:141] libmachine: Creating SSH key...
	I1014 07:19:14.988345   13076 main.go:141] libmachine: Creating VM...
	I1014 07:19:14.988345   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1014 07:19:17.790932   13076 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1014 07:19:17.791005   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:17.791145   13076 main.go:141] libmachine: Using switch "Default Switch"
	I1014 07:19:17.791145   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1014 07:19:19.529861   13076 main.go:141] libmachine: [stdout =====>] : True
	
	I1014 07:19:19.529861   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:19.529861   13076 main.go:141] libmachine: Creating VHD
	I1014 07:19:19.530436   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\fixed.vhd' -SizeBytes 10MB -Fixed
	I1014 07:19:23.105904   13076 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 267BC11D-A195-4636-8AE9-EF1D7966C95C
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I1014 07:19:23.105987   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:23.106037   13076 main.go:141] libmachine: Writing magic tar header
	I1014 07:19:23.106037   13076 main.go:141] libmachine: Writing SSH key tar header
	I1014 07:19:23.118040   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\disk.vhd' -VHDType Dynamic -DeleteSource
	I1014 07:19:26.201609   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:26.201950   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:26.202023   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\disk.vhd' -SizeBytes 20000MB
	I1014 07:19:28.768368   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:28.768789   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:28.768969   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-132600 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I1014 07:19:32.226690   13076 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-132600 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I1014 07:19:32.226915   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:32.226915   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-132600 -DynamicMemoryEnabled $false
	I1014 07:19:34.390731   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:34.390837   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:34.391049   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-132600 -Count 2
	I1014 07:19:36.449382   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:36.449628   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:36.449628   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-132600 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\boot2docker.iso'
	I1014 07:19:38.932909   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:38.932909   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:38.932909   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-132600 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\disk.vhd'
	I1014 07:19:41.509229   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:41.509229   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:41.509229   13076 main.go:141] libmachine: Starting VM...
	I1014 07:19:41.509229   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-132600
	I1014 07:19:44.718680   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:44.718680   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:44.719475   13076 main.go:141] libmachine: Waiting for host to start...
	I1014 07:19:44.719770   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:19:46.954823   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:19:46.954823   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:46.955829   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:19:49.408227   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:49.408227   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:50.409369   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:19:52.561607   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:19:52.561912   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:52.561912   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:19:55.081136   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:55.081187   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:56.082401   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:19:58.271497   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:19:58.271497   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:58.272384   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:00.740863   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:20:00.741070   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:01.741536   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:03.902654   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:03.902721   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:03.902721   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:06.378064   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:20:06.378064   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:07.379104   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:09.537329   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:09.537329   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:09.537541   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:12.074320   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:12.074320   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:12.074834   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:14.171613   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:14.171613   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:14.171613   13076 machine.go:93] provisionDockerMachine start ...
	I1014 07:20:14.172377   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:16.225265   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:16.225265   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:16.225634   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:18.692061   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:18.692061   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:18.698261   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:20:18.713495   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.108.120 22 <nil> <nil>}
	I1014 07:20:18.713568   13076 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 07:20:18.839311   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1014 07:20:18.839517   13076 buildroot.go:166] provisioning hostname "ha-132600"
	I1014 07:20:18.839517   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:20.911254   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:20.911424   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:20.911601   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:23.370138   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:23.370756   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:23.377008   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:20:23.377773   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.108.120 22 <nil> <nil>}
	I1014 07:20:23.377773   13076 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-132600 && echo "ha-132600" | sudo tee /etc/hostname
	I1014 07:20:23.537548   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-132600
	
	I1014 07:20:23.537548   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:25.571429   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:25.571429   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:25.572326   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:28.068677   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:28.068677   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:28.077026   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:20:28.078615   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.108.120 22 <nil> <nil>}
	I1014 07:20:28.078615   13076 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-132600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-132600/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-132600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 07:20:28.216809   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: 
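
The script above (run at 07:20:28) is an idempotent /etc/hosts fixup: it adds a "127.0.1.1 ha-132600" entry only when no line already maps the new hostname, rewriting an existing 127.0.1.1 line in place if one exists. A sketch of how such a script could be rendered for an arbitrary hostname; renderHostsFixup is an illustrative name, not minikube's helper.

    package provision

    import "fmt"

    // renderHostsFixup returns the shell fragment logged above for a
    // given hostname, substituted at all three use sites.
    func renderHostsFixup(name string) string {
        return fmt.Sprintf(`
            if ! grep -xq '.*\s%[1]s' /etc/hosts; then
                if grep -xq '127.0.1.1\s.*' /etc/hosts; then
                    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
                else
                    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
                fi
            fi`, name)
    }
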
	I1014 07:20:28.216874   13076 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I1014 07:20:28.216990   13076 buildroot.go:174] setting up certificates
	I1014 07:20:28.217016   13076 provision.go:84] configureAuth start
	I1014 07:20:28.217047   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:30.246758   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:30.247093   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:30.247368   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:32.688428   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:32.688428   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:32.688428   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:34.748908   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:34.748908   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:34.749349   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:37.221628   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:37.221798   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:37.221798   13076 provision.go:143] copyHostCerts
	I1014 07:20:37.221998   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I1014 07:20:37.222444   13076 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I1014 07:20:37.222558   13076 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I1014 07:20:37.223110   13076 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1014 07:20:37.224109   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I1014 07:20:37.224599   13076 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I1014 07:20:37.224599   13076 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I1014 07:20:37.224959   13076 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I1014 07:20:37.225332   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I1014 07:20:37.225332   13076 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I1014 07:20:37.225332   13076 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I1014 07:20:37.226765   13076 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1014 07:20:37.227733   13076 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-132600 san=[127.0.0.1 172.20.108.120 ha-132600 localhost minikube]
	I1014 07:20:37.731674   13076 provision.go:177] copyRemoteCerts
	I1014 07:20:37.744706   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 07:20:37.744706   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:39.801309   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:39.801309   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:39.801309   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:42.273306   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:42.273367   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:42.273367   13076 sshutil.go:53] new ssh client: &{IP:172.20.108.120 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\id_rsa Username:docker}
	I1014 07:20:42.378794   13076 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.6340227s)
	I1014 07:20:42.378847   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1014 07:20:42.378847   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1200 bytes)
	I1014 07:20:42.424772   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1014 07:20:42.425289   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 07:20:42.471397   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1014 07:20:42.471964   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1014 07:20:42.527104   13076 provision.go:87] duration metric: took 14.3100395s to configureAuth
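
configureAuth (14.3s here, mostly Hyper-V round-trips) refreshes the local CA material and then signs a server certificate whose SANs are exactly the values logged at 07:20:37: IPs 127.0.0.1 and 172.20.108.120 plus the names ha-132600, localhost, and minikube. A self-contained sketch of that signing step using only the Go standard library; the org and SAN list come from the log, while key sizes, serials, and the lifetime are illustrative choices, and error handling is elided for brevity.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048) // stand-in for ca-key.pem
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{Organization: []string{"jenkins.ha-132600"}},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(26280 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server cert with the SAN set logged at 07:20:37.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-132600"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour),
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"ha-132600", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.20.108.120")},
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }
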
	I1014 07:20:42.527104   13076 buildroot.go:189] setting minikube options for container-runtime
	I1014 07:20:42.527712   13076 config.go:182] Loaded profile config "ha-132600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:20:42.528252   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:44.598308   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:44.598308   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:44.599305   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:47.040424   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:47.040637   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:47.045726   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:20:47.046398   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.108.120 22 <nil> <nil>}
	I1014 07:20:47.046398   13076 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1014 07:20:47.167710   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1014 07:20:47.167801   13076 buildroot.go:70] root file system type: tmpfs
	I1014 07:20:47.168208   13076 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1014 07:20:47.168315   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:49.202111   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:49.202331   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:49.202424   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:51.648278   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:51.648278   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:51.654248   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:20:51.655052   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.108.120 22 <nil> <nil>}
	I1014 07:20:51.655052   13076 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1014 07:20:51.814048   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1014 07:20:51.814229   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:53.871643   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:53.871643   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:53.872030   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:56.286054   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:56.286054   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:56.292357   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:20:56.293125   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.108.120 22 <nil> <nil>}
	I1014 07:20:56.293125   13076 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1014 07:20:58.458028   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1014 07:20:58.458028   13076 machine.go:96] duration metric: took 44.286361s to provisionDockerMachine
	I1014 07:20:58.458028   13076 client.go:171] duration metric: took 1m52.7353371s to LocalClient.Create
	I1014 07:20:58.458028   13076 start.go:167] duration metric: took 1m52.7353371s to libmachine.API.Create "ha-132600"
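
The unit install at 07:20:56 shows the provisioner's idempotent pattern: the rendered unit is written to docker.service.new, and only if diff -u finds a difference (here the target file did not exist at all, hence the "can't stat" message and the fresh symlink) is the new file moved into place and the daemon reloaded, enabled, and restarted. Sketched as a Go helper that builds the same shell command; updateUnitCmd is an illustrative name.

    package provision

    import "fmt"

    // updateUnitCmd builds the diff-or-install command logged above for an
    // arbitrary systemd unit, so the service restarts only on real changes.
    func updateUnitCmd(name string) string {
        unit := "/lib/systemd/system/" + name
        return fmt.Sprintf(
            "sudo diff -u %[1]s %[1]s.new || "+
                "{ sudo mv %[1]s.new %[1]s; sudo systemctl -f daemon-reload && "+
                "sudo systemctl -f enable %[2]s && sudo systemctl -f restart %[2]s; }",
            unit, name)
    }
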
	I1014 07:20:58.458028   13076 start.go:293] postStartSetup for "ha-132600" (driver="hyperv")
	I1014 07:20:58.458028   13076 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 07:20:58.471262   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 07:20:58.471262   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:21:00.512590   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:21:00.513162   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:00.513321   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:21:02.924795   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:21:02.924929   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:02.925163   13076 sshutil.go:53] new ssh client: &{IP:172.20.108.120 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\id_rsa Username:docker}
	I1014 07:21:03.032582   13076 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.5613141s)
	I1014 07:21:03.044373   13076 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 07:21:03.050831   13076 info.go:137] Remote host: Buildroot 2023.02.9
	I1014 07:21:03.050831   13076 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I1014 07:21:03.051271   13076 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I1014 07:21:03.052198   13076 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem -> 9362.pem in /etc/ssl/certs
	I1014 07:21:03.052295   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem -> /etc/ssl/certs/9362.pem
	I1014 07:21:03.063873   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 07:21:03.081968   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem --> /etc/ssl/certs/9362.pem (1708 bytes)
	I1014 07:21:03.128637   13076 start.go:296] duration metric: took 4.6706031s for postStartSetup
	I1014 07:21:03.132031   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:21:05.196102   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:21:05.196622   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:05.196696   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:21:07.624940   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:21:07.625781   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:07.626103   13076 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\config.json ...
	I1014 07:21:07.629046   13076 start.go:128] duration metric: took 2m1.9103356s to createHost
	I1014 07:21:07.629046   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:21:09.687162   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:21:09.687420   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:09.687490   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:21:12.104556   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:21:12.104556   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:12.110165   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:21:12.110572   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.108.120 22 <nil> <nil>}
	I1014 07:21:12.110572   13076 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1014 07:21:12.241524   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728915672.238854784
	
	I1014 07:21:12.241655   13076 fix.go:216] guest clock: 1728915672.238854784
	I1014 07:21:12.241726   13076 fix.go:229] Guest: 2024-10-14 07:21:12.238854784 -0700 PDT Remote: 2024-10-14 07:21:07.6290467 -0700 PDT m=+127.381500801 (delta=4.609808084s)
	I1014 07:21:12.241841   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:21:14.299338   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:21:14.299599   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:14.299599   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:21:16.767477   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:21:16.767665   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:16.775565   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:21:16.775565   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.108.120 22 <nil> <nil>}
	I1014 07:21:16.775565   13076 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1728915672
	I1014 07:21:16.921038   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Oct 14 14:21:12 UTC 2024
	
	I1014 07:21:16.921096   13076 fix.go:236] clock set: Mon Oct 14 14:21:12 UTC 2024
	 (err=<nil>)
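
The fix at 07:21:12 is the guest-clock check: the guest's clock is read over SSH with "date +%s.%N", compared against the local wall clock (delta=4.609808084s here), and then reset with "sudo date -s @<epoch>". A hedged sketch of that check; runSSH and the 2-second threshold are illustrative assumptions, not minikube's fix.go internals.

    package provision

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // syncGuestClock reads the guest clock, measures drift against the
    // local wall clock, and resets the guest when the drift is large.
    func syncGuestClock(runSSH func(string) (string, error)) error {
        out, err := runSSH("date +%s.%N")
        if err != nil {
            return err
        }
        secs, err := strconv.ParseFloat(strings.TrimSpace(out), 64)
        if err != nil {
            return err
        }
        drift := time.Since(time.Unix(0, int64(secs*1e9)))
        if drift < 0 {
            drift = -drift
        }
        if drift > 2*time.Second { // illustrative threshold
            _, err = runSSH(fmt.Sprintf("sudo date -s @%d", time.Now().Unix()))
        }
        return err
    }
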
	I1014 07:21:16.921169   13076 start.go:83] releasing machines lock for "ha-132600", held for 2m11.2030498s
	I1014 07:21:16.921319   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:21:18.988904   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:21:18.989898   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:18.989947   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:21:21.410167   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:21:21.410403   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:21.414482   13076 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1014 07:21:21.414683   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:21:21.423844   13076 ssh_runner.go:195] Run: cat /version.json
	I1014 07:21:21.423844   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:21:23.554001   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:21:23.554206   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:23.554362   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:21:23.565882   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:21:23.565882   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:23.565882   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:21:26.092249   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:21:26.092249   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:26.092379   13076 sshutil.go:53] new ssh client: &{IP:172.20.108.120 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\id_rsa Username:docker}
	I1014 07:21:26.119019   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:21:26.119086   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:26.119215   13076 sshutil.go:53] new ssh client: &{IP:172.20.108.120 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\id_rsa Username:docker}
	I1014 07:21:26.179573   13076 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.764971s)
	W1014 07:21:26.179670   13076 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1014 07:21:26.212879   13076 ssh_runner.go:235] Completed: cat /version.json: (4.7890281s)
	I1014 07:21:26.223815   13076 ssh_runner.go:195] Run: systemctl --version
	I1014 07:21:26.243251   13076 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 07:21:26.251321   13076 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 07:21:26.262431   13076 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 07:21:26.290027   13076 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1014 07:21:26.290027   13076 start.go:495] detecting cgroup driver to use...
	I1014 07:21:26.290027   13076 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W1014 07:21:26.296743   13076 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W1014 07:21:26.296743   13076 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
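
Note the root cause of these two warnings: the connectivity probe launched at 07:21:21 passed the host-side binary name curl.exe into the Linux guest, where it exits 127 with "command not found" rather than reporting actual reachability, and minikube then surfaces the proxy warning above. A sketch of a probe that picks the binary name per target; runSSH and guestIsLinux are illustrative, not minikube's API.

    package provision

    // probeRegistry runs the reachability check above. The failure at
    // 07:21:26 was exit status 127 ("curl.exe: command not found"), so the
    // sketch chooses the binary name by target OS to avoid that false
    // negative inside the Buildroot guest.
    func probeRegistry(runSSH func(string) (string, error), guestIsLinux bool) error {
        curl := "curl.exe" // correct in a Windows host shell
        if guestIsLinux {
            curl = "curl" // inside the Linux guest only plain "curl" exists
        }
        _, err := runSSH(curl + " -sS -m 2 https://registry.k8s.io/")
        return err
    }
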
	I1014 07:21:26.340326   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1014 07:21:26.370362   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1014 07:21:26.391878   13076 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1014 07:21:26.402847   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1014 07:21:26.433241   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1014 07:21:26.462619   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1014 07:21:26.491735   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1014 07:21:26.520404   13076 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 07:21:26.550615   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1014 07:21:26.580278   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1014 07:21:26.608564   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1014 07:21:26.636849   13076 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 07:21:26.656448   13076 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1014 07:21:26.667504   13076 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1014 07:21:26.700184   13076 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 07:21:26.726834   13076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:21:26.916163   13076 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1014 07:21:26.953200   13076 start.go:495] detecting cgroup driver to use...
	I1014 07:21:26.967627   13076 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1014 07:21:27.005080   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 07:21:27.039193   13076 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 07:21:27.083701   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 07:21:27.118670   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1014 07:21:27.153508   13076 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1014 07:21:27.209689   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1014 07:21:27.230765   13076 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 07:21:27.273213   13076 ssh_runner.go:195] Run: which cri-dockerd
	I1014 07:21:27.291783   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1014 07:21:27.308581   13076 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1014 07:21:27.351709   13076 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1014 07:21:27.539571   13076 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1014 07:21:27.725567   13076 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1014 07:21:27.725567   13076 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1014 07:21:27.768723   13076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:21:27.951142   13076 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1014 07:21:30.486027   13076 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5347969s)
	I1014 07:21:30.497722   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1014 07:21:30.531680   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1014 07:21:30.563742   13076 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1014 07:21:30.767218   13076 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1014 07:21:30.967093   13076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:21:31.191192   13076 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1014 07:21:31.229757   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1014 07:21:31.267426   13076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:21:31.462408   13076 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1014 07:21:31.569509   13076 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1014 07:21:31.581151   13076 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1014 07:21:31.591322   13076 start.go:563] Will wait 60s for crictl version
	I1014 07:21:31.604845   13076 ssh_runner.go:195] Run: which crictl
	I1014 07:21:31.622874   13076 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1014 07:21:31.680621   13076 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I1014 07:21:31.689423   13076 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1014 07:21:31.732594   13076 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1014 07:21:31.766375   13076 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	I1014 07:21:31.766375   13076 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I1014 07:21:31.770835   13076 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I1014 07:21:31.770835   13076 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I1014 07:21:31.770835   13076 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I1014 07:21:31.770835   13076 ip.go:211] Found interface: {Index:13 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:08:65:4d Flags:up|broadcast|multicast|running}
	I1014 07:21:31.773955   13076 ip.go:214] interface addr: fe80::b548:ff79:d464:75b8/64
	I1014 07:21:31.773955   13076 ip.go:214] interface addr: 172.20.96.1/20
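
The ip.go lines above scan the host's adapters for the one backing the Hyper-V Default Switch, skip "Ethernet 2" and the loopback, and keep the IPv4 address 172.20.96.1/20 (the fe80:: address is passed over) so the guest can resolve host.minikube.internal. The same scan with the standard library; a sketch, not minikube's ip.go.

    package main

    import (
        "fmt"
        "net"
        "strings"
    )

    // hostSwitchIP returns the first IPv4 address of the first interface
    // whose name starts with the given prefix.
    func hostSwitchIP(prefix string) (net.IP, error) {
        ifaces, err := net.Interfaces()
        if err != nil {
            return nil, err
        }
        for _, iface := range ifaces {
            if !strings.HasPrefix(iface.Name, prefix) {
                continue // "Ethernet 2" and the loopback are skipped this way
            }
            addrs, _ := iface.Addrs()
            for _, a := range addrs {
                if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.To4() != nil {
                    return ipnet.IP, nil // IPv4 wins; fe80:: is passed over
                }
            }
        }
        return nil, fmt.Errorf("no interface matching %q", prefix)
    }

    func main() {
        ip, err := hostSwitchIP("vEthernet (Default Switch)")
        fmt.Println(ip, err)
    }
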
	I1014 07:21:31.784171   13076 ssh_runner.go:195] Run: grep 172.20.96.1	host.minikube.internal$ /etc/hosts
	I1014 07:21:31.791192   13076 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.20.96.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 07:21:31.825649   13076 kubeadm.go:883] updating cluster {Name:ha-132600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-132600 Namespace:default APIServerHAVIP:172.20.111.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.108.120 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 07:21:31.825649   13076 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1014 07:21:31.834667   13076 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1014 07:21:31.867707   13076 docker.go:689] Got preloaded images: 
	I1014 07:21:31.867707   13076 docker.go:695] registry.k8s.io/kube-apiserver:v1.31.1 wasn't preloaded
	I1014 07:21:31.880238   13076 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1014 07:21:31.908893   13076 ssh_runner.go:195] Run: which lz4
	I1014 07:21:31.914374   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1014 07:21:31.924715   13076 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1014 07:21:31.930846   13076 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1014 07:21:31.931075   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (342028912 bytes)
	I1014 07:21:33.829320   13076 docker.go:653] duration metric: took 1.9148657s to copy over tarball
	I1014 07:21:33.840382   13076 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1014 07:21:42.539043   13076 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.6986506s)
	I1014 07:21:42.539174   13076 ssh_runner.go:146] rm: /preloaded.tar.lz4
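
Image preload (07:21:31 to 07:21:42): since the stat probe for /preloaded.tar.lz4 failed on the guest, the cached 342 MB tarball was copied over SSH, unpacked into /var with lz4 (8.7s), and then removed. The same sequence as a sketch; runSSH and scpFile are illustrative placeholders for minikube's ssh_runner transfer helpers.

    package provision

    // loadPreload copies the cached image tarball to the guest when it is
    // missing, extracts it under /var, and cleans up, as in the log above.
    func loadPreload(runSSH func(string) (string, error),
        scpFile func(local, remote string) error, localTarball string) error {
        if _, err := runSSH(`stat -c "%s %y" /preloaded.tar.lz4`); err != nil {
            // The existence check failed, so the tarball must be transferred.
            if err := scpFile(localTarball, "/preloaded.tar.lz4"); err != nil {
                return err
            }
        }
        if _, err := runSSH("sudo tar --xattrs --xattrs-include security.capability" +
            " -I lz4 -C /var -xf /preloaded.tar.lz4"); err != nil {
            return err
        }
        _, err := runSSH("sudo rm -f /preloaded.tar.lz4")
        return err
    }
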
	I1014 07:21:42.611065   13076 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1014 07:21:42.636639   13076 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I1014 07:21:42.681041   13076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:21:42.879066   13076 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1014 07:21:46.161713   13076 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.2824935s)
	I1014 07:21:46.173131   13076 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1014 07:21:46.198645   13076 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1014 07:21:46.198645   13076 cache_images.go:84] Images are preloaded, skipping loading
	I1014 07:21:46.198645   13076 kubeadm.go:934] updating node { 172.20.108.120 8443 v1.31.1 docker true true} ...
	I1014 07:21:46.198645   13076 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-132600 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.20.108.120
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-132600 Namespace:default APIServerHAVIP:172.20.111.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 07:21:46.209892   13076 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1014 07:21:46.283733   13076 cni.go:84] Creating CNI manager for ""
	I1014 07:21:46.283861   13076 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1014 07:21:46.283932   13076 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1014 07:21:46.284000   13076 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.20.108.120 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-132600 NodeName:ha-132600 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.20.108.120"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.20.108.120 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 07:21:46.284290   13076 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.20.108.120
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-132600"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "172.20.108.120"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.20.108.120"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1014 07:21:46.284367   13076 kube-vip.go:115] generating kube-vip config ...
	I1014 07:21:46.295693   13076 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1014 07:21:46.324017   13076 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1014 07:21:46.324230   13076 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.20.111.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I1014 07:21:46.334184   13076 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1014 07:21:46.349321   13076 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 07:21:46.360771   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1014 07:21:46.379224   13076 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (310 bytes)
	I1014 07:21:46.412080   13076 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 07:21:46.441691   13076 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I1014 07:21:46.479025   13076 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1014 07:21:46.519461   13076 ssh_runner.go:195] Run: grep 172.20.111.254	control-plane.minikube.internal$ /etc/hosts
	I1014 07:21:46.525021   13076 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.20.111.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 07:21:46.554943   13076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:21:46.730466   13076 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 07:21:46.761153   13076 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600 for IP: 172.20.108.120
	I1014 07:21:46.761153   13076 certs.go:194] generating shared ca certs ...
	I1014 07:21:46.761153   13076 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:21:46.761825   13076 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I1014 07:21:46.762680   13076 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I1014 07:21:46.762846   13076 certs.go:256] generating profile certs ...
	I1014 07:21:46.763605   13076 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\client.key
	I1014 07:21:46.763605   13076 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\client.crt with IP's: []
	I1014 07:21:46.955865   13076 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\client.crt ...
	I1014 07:21:46.955865   13076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\client.crt: {Name:mk4a5e51a4b9d62279c7ef4ef47716bcdf88d704 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:21:46.957857   13076 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\client.key ...
	I1014 07:21:46.957857   13076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\client.key: {Name:mkc80e99cfe4d8d17198a87fb188cfa1a72d727d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:21:46.958881   13076 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.key.59094d61
	I1014 07:21:46.958881   13076 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.crt.59094d61 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.20.108.120 172.20.111.254]
	I1014 07:21:47.375660   13076 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.crt.59094d61 ...
	I1014 07:21:47.375660   13076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.crt.59094d61: {Name:mke858ab0ac103663123708f9814cfdffe03211e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:21:47.377216   13076 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.key.59094d61 ...
	I1014 07:21:47.377216   13076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.key.59094d61: {Name:mk972fca892a193d6b4e84d42e27ed86ce9f0457 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:21:47.377811   13076 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.crt.59094d61 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.crt
	I1014 07:21:47.391806   13076 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.key.59094d61 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.key
	I1014 07:21:47.392823   13076 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.key
	I1014 07:21:47.392823   13076 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.crt with IP's: []
	I1014 07:21:47.675751   13076 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.crt ...
	I1014 07:21:47.675751   13076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.crt: {Name:mk5a9f1239213dd0a0f36b4ee41d5ac56cbd79c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:21:47.677918   13076 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.key ...
	I1014 07:21:47.677918   13076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.key: {Name:mkb931094180fef1516b0d43acb6430ecbcbb3fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:21:47.678894   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I1014 07:21:47.678894   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I1014 07:21:47.678894   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1014 07:21:47.678894   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1014 07:21:47.679943   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1014 07:21:47.679943   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1014 07:21:47.679943   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1014 07:21:47.691415   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1014 07:21:47.693468   13076 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\936.pem (1338 bytes)
	W1014 07:21:47.693468   13076 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\936_empty.pem, impossibly tiny 0 bytes
	I1014 07:21:47.694004   13076 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1014 07:21:47.694400   13076 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I1014 07:21:47.694671   13076 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1014 07:21:47.695091   13076 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I1014 07:21:47.695960   13076 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem (1708 bytes)
	I1014 07:21:47.696145   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\936.pem -> /usr/share/ca-certificates/936.pem
	I1014 07:21:47.696401   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem -> /usr/share/ca-certificates/9362.pem
	I1014 07:21:47.696580   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1014 07:21:47.697580   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 07:21:47.742705   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1014 07:21:47.790988   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 07:21:47.837975   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 07:21:47.884872   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1014 07:21:47.936878   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1014 07:21:47.985330   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 07:21:48.029181   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 07:21:48.078824   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\936.pem --> /usr/share/ca-certificates/936.pem (1338 bytes)
	I1014 07:21:48.127907   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem --> /usr/share/ca-certificates/9362.pem (1708 bytes)
	I1014 07:21:48.174542   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 07:21:48.218635   13076 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
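The "scp memory" entries from ssh_runner.go:362 copy assets that exist only in memory (the 738-byte kubeconfig here, the CNI manifest later) over the established SSH session rather than from a local file. A plausible minimal shape for such a transfer, assuming golang.org/x/crypto/ssh and a sudo-tee sink (both are assumptions, not minikube's exact mechanism):

    package assets

    import (
        "bytes"
        "fmt"

        "golang.org/x/crypto/ssh"
    )

    // pushMemoryAsset streams in-memory bytes to a remote path with root
    // privileges; tee's stdout echo is discarded.
    func pushMemoryAsset(client *ssh.Client, data []byte, dst string) error {
        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()
        sess.Stdin = bytes.NewReader(data)
        return sess.Run(fmt.Sprintf("sudo tee %q >/dev/null", dst))
    }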
	I1014 07:21:48.264009   13076 ssh_runner.go:195] Run: openssl version
	I1014 07:21:48.282939   13076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 07:21:48.310256   13076 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 07:21:48.318050   13076 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 13:44 /usr/share/ca-certificates/minikubeCA.pem
	I1014 07:21:48.328768   13076 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 07:21:48.348090   13076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 07:21:48.376484   13076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/936.pem && ln -fs /usr/share/ca-certificates/936.pem /etc/ssl/certs/936.pem"
	I1014 07:21:48.408494   13076 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/936.pem
	I1014 07:21:48.416513   13076 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 14:00 /usr/share/ca-certificates/936.pem
	I1014 07:21:48.427815   13076 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/936.pem
	I1014 07:21:48.447033   13076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/936.pem /etc/ssl/certs/51391683.0"
	I1014 07:21:48.474293   13076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9362.pem && ln -fs /usr/share/ca-certificates/9362.pem /etc/ssl/certs/9362.pem"
	I1014 07:21:48.502293   13076 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9362.pem
	I1014 07:21:48.509688   13076 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 14:00 /usr/share/ca-certificates/9362.pem
	I1014 07:21:48.519811   13076 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9362.pem
	I1014 07:21:48.540403   13076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9362.pem /etc/ssl/certs/3ec20f2e.0"
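The openssl/ln pairs above implement OpenSSL's hashed-CA-directory convention: at verification time a CA is located through a symlink named <subject-hash>.0, where the hash comes from openssl x509 -hash -noout (b5213941 for minikubeCA, 51391683 and 3ec20f2e for the user certs). A sketch of the same two steps in Go, shelling out to openssl just as the runner does:

    package certs

    import (
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkBySubjectHash symlinks pemPath into certDir under the
    // <subject-hash>.0 name that OpenSSL uses for CA lookup.
    func linkBySubjectHash(certDir, pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        link := filepath.Join(certDir, strings.TrimSpace(string(out))+".0")
        _ = os.Remove(link) // mimic ln -fs: replace any stale link
        return os.Symlink(pemPath, link)
    }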
	I1014 07:21:48.569007   13076 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 07:21:48.575758   13076 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1014 07:21:48.576118   13076 kubeadm.go:392] StartCluster: {Name:ha-132600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-132600 Namespace:default APIServerHAVIP:172.20.111.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.108.120 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 07:21:48.584978   13076 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1014 07:21:48.619430   13076 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 07:21:48.647606   13076 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 07:21:48.676291   13076 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 07:21:48.693679   13076 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 07:21:48.693679   13076 kubeadm.go:157] found existing configuration files:
	
	I1014 07:21:48.703613   13076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 07:21:48.722067   13076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 07:21:48.732618   13076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 07:21:48.759836   13076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 07:21:48.777622   13076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 07:21:48.789748   13076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 07:21:48.818512   13076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 07:21:48.837396   13076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 07:21:48.848696   13076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 07:21:48.876926   13076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 07:21:48.893530   13076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 07:21:48.904372   13076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
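kubeadm.go:163 runs the same probe for each of the four kubeconfigs: if the file does not mention https://control-plane.minikube.internal:8443 (grep exits with status 2 here because the files do not exist on first start), the file is removed so kubeadm init can regenerate it. Compactly, over an abstract command runner (the runner signature is illustrative):

    package kubeadm

    import "fmt"

    const endpoint = "https://control-plane.minikube.internal:8443"

    // cleanStaleConfigs removes any kubeconfig that does not point at the
    // expected control-plane endpoint; run executes a command over SSH.
    func cleanStaleConfigs(run func(cmd string) error) error {
        for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
            path := "/etc/kubernetes/" + f
            if err := run(fmt.Sprintf("sudo grep %s %s", endpoint, path)); err != nil {
                // grep failed: endpoint absent or file missing, so clear it.
                if err := run("sudo rm -f " + path); err != nil {
                    return err
                }
            }
        }
        return nil
    }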
	I1014 07:21:48.920210   13076 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1014 07:21:49.392185   13076 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 07:22:04.140256   13076 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1014 07:22:04.140412   13076 kubeadm.go:310] [preflight] Running pre-flight checks
	I1014 07:22:04.140501   13076 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 07:22:04.140848   13076 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 07:22:04.141206   13076 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 07:22:04.141311   13076 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 07:22:04.144180   13076 out.go:235]   - Generating certificates and keys ...
	I1014 07:22:04.144373   13076 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1014 07:22:04.144575   13076 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1014 07:22:04.144939   13076 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1014 07:22:04.145172   13076 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1014 07:22:04.145315   13076 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1014 07:22:04.145521   13076 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1014 07:22:04.145845   13076 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1014 07:22:04.146212   13076 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-132600 localhost] and IPs [172.20.108.120 127.0.0.1 ::1]
	I1014 07:22:04.146365   13076 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1014 07:22:04.146614   13076 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-132600 localhost] and IPs [172.20.108.120 127.0.0.1 ::1]
	I1014 07:22:04.147003   13076 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1014 07:22:04.147200   13076 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1014 07:22:04.147236   13076 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1014 07:22:04.147236   13076 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 07:22:04.147236   13076 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 07:22:04.147236   13076 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 07:22:04.147954   13076 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 07:22:04.147987   13076 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 07:22:04.147987   13076 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 07:22:04.147987   13076 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 07:22:04.148785   13076 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 07:22:04.151235   13076 out.go:235]   - Booting up control plane ...
	I1014 07:22:04.151235   13076 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 07:22:04.151819   13076 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 07:22:04.152046   13076 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 07:22:04.152333   13076 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 07:22:04.152333   13076 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 07:22:04.152333   13076 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1014 07:22:04.152333   13076 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 07:22:04.152333   13076 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 07:22:04.153426   13076 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 523.263843ms
	I1014 07:22:04.153534   13076 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1014 07:22:04.153534   13076 kubeadm.go:310] [api-check] The API server is healthy after 9.176461865s
	I1014 07:22:04.153534   13076 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1014 07:22:04.154243   13076 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1014 07:22:04.154243   13076 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1014 07:22:04.154779   13076 kubeadm.go:310] [mark-control-plane] Marking the node ha-132600 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1014 07:22:04.154779   13076 kubeadm.go:310] [bootstrap-token] Using token: rm3d8a.inqsemtt5b5l5x2e
	I1014 07:22:04.157926   13076 out.go:235]   - Configuring RBAC rules ...
	I1014 07:22:04.158760   13076 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1014 07:22:04.158928   13076 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1014 07:22:04.158928   13076 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1014 07:22:04.159656   13076 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1014 07:22:04.159851   13076 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1014 07:22:04.160089   13076 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1014 07:22:04.160270   13076 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1014 07:22:04.160529   13076 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1014 07:22:04.160529   13076 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1014 07:22:04.160529   13076 kubeadm.go:310] 
	I1014 07:22:04.160529   13076 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1014 07:22:04.160529   13076 kubeadm.go:310] 
	I1014 07:22:04.160529   13076 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1014 07:22:04.160529   13076 kubeadm.go:310] 
	I1014 07:22:04.161100   13076 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1014 07:22:04.161186   13076 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1014 07:22:04.161352   13076 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1014 07:22:04.161352   13076 kubeadm.go:310] 
	I1014 07:22:04.161534   13076 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1014 07:22:04.161534   13076 kubeadm.go:310] 
	I1014 07:22:04.161706   13076 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1014 07:22:04.161706   13076 kubeadm.go:310] 
	I1014 07:22:04.161950   13076 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1014 07:22:04.161950   13076 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1014 07:22:04.161950   13076 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1014 07:22:04.161950   13076 kubeadm.go:310] 
	I1014 07:22:04.162515   13076 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1014 07:22:04.162607   13076 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1014 07:22:04.162607   13076 kubeadm.go:310] 
	I1014 07:22:04.162607   13076 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token rm3d8a.inqsemtt5b5l5x2e \
	I1014 07:22:04.162607   13076 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f876f951efbe7f1ed47ca578ee0d33e6ce8fe69c1a008a8154ee00f56ac84e46 \
	I1014 07:22:04.163162   13076 kubeadm.go:310] 	--control-plane 
	I1014 07:22:04.163261   13076 kubeadm.go:310] 
	I1014 07:22:04.163340   13076 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1014 07:22:04.163340   13076 kubeadm.go:310] 
	I1014 07:22:04.163570   13076 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token rm3d8a.inqsemtt5b5l5x2e \
	I1014 07:22:04.163570   13076 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f876f951efbe7f1ed47ca578ee0d33e6ce8fe69c1a008a8154ee00f56ac84e46 
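The [kubelet-check] and [api-check] phases above are plain HTTP polls against a health endpoint under a deadline (4m0s each; the kubelet answered on 127.0.0.1:10248/healthz after ~523ms, the API server after ~9.2s). A minimal version of that kind of probe (a sketch, not kubeadm's code):

    package health

    import (
        "fmt"
        "net/http"
        "time"
    )

    // waitHealthy polls url until it returns 200 OK or the deadline passes.
    func waitHealthy(url string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := http.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("%s not healthy after %s", url, timeout)
    }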
	I1014 07:22:04.163570   13076 cni.go:84] Creating CNI manager for ""
	I1014 07:22:04.163570   13076 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1014 07:22:04.167124   13076 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1014 07:22:04.179134   13076 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1014 07:22:04.187701   13076 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1014 07:22:04.187701   13076 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1014 07:22:04.233053   13076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1014 07:22:04.912894   13076 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1014 07:22:04.925574   13076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-132600 minikube.k8s.io/updated_at=2024_10_14T07_22_04_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b minikube.k8s.io/name=ha-132600 minikube.k8s.io/primary=true
	I1014 07:22:04.927795   13076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 07:22:04.958241   13076 ops.go:34] apiserver oom_adj: -16
	I1014 07:22:05.168034   13076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 07:22:05.667048   13076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 07:22:06.166157   13076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 07:22:06.667561   13076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 07:22:07.167283   13076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 07:22:07.305312   13076 kubeadm.go:1113] duration metric: took 2.3924152s to wait for elevateKubeSystemPrivileges
	I1014 07:22:07.306284   13076 kubeadm.go:394] duration metric: took 18.730142s to StartCluster
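elevateKubeSystemPrivileges retries kubectl get sa default at roughly 500ms intervals (visible above from 07:22:05 to 07:22:07) until the token controller has created the default service account, so the minikube-rbac cluster-admin binding can take effect; here the wait took 2.39s. The same loop, sketched over the runner abstraction used earlier:

    package kubeadm

    import "time"

    // waitDefaultSA polls for the default service account; kubectl exits
    // non-zero until the controller has created it.
    func waitDefaultSA(run func(cmd string) error, timeout time.Duration) bool {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if run("sudo kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig") == nil {
                return true
            }
            time.Sleep(500 * time.Millisecond)
        }
        return false
    }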
	I1014 07:22:07.306284   13076 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:22:07.306284   13076 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1014 07:22:07.308077   13076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:22:07.309777   13076 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1014 07:22:07.309777   13076 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.20.108.120 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1014 07:22:07.309953   13076 start.go:241] waiting for startup goroutines ...
	I1014 07:22:07.309868   13076 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1014 07:22:07.310143   13076 addons.go:69] Setting storage-provisioner=true in profile "ha-132600"
	I1014 07:22:07.310235   13076 addons.go:234] Setting addon storage-provisioner=true in "ha-132600"
	I1014 07:22:07.310143   13076 addons.go:69] Setting default-storageclass=true in profile "ha-132600"
	I1014 07:22:07.310402   13076 config.go:182] Loaded profile config "ha-132600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:22:07.310402   13076 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-132600"
	I1014 07:22:07.310402   13076 host.go:66] Checking if "ha-132600" exists ...
	I1014 07:22:07.311457   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:22:07.311926   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:22:07.495748   13076 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.20.96.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1014 07:22:08.130501   13076 start.go:971] {"host.minikube.internal": 172.20.96.1} host record injected into CoreDNS's ConfigMap
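The one-liner above splices two plugins into the CoreDNS ConfigMap before replacing it: a hosts block inserted ahead of the forward plugin, and a log directive inserted ahead of errors. Reconstructed from the sed expressions, the patched Corefile excerpt reads (unrelated plugins elided):

        .:53 {
            log
            errors
            ...
            hosts {
               172.20.96.1 host.minikube.internal
               fallthrough
            }
            forward . /etc/resolv.conf
            ...
        }

This is what lets pods resolve host.minikube.internal to the Hyper-V host (172.20.96.1), with every other name falling through to the normal forwarders.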
	I1014 07:22:09.594501   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:22:09.594501   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:09.594786   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:22:09.594878   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:09.595740   13076 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1014 07:22:09.596691   13076 kapi.go:59] client config for ha-132600: &rest.Config{Host:"https://172.20.111.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-132600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-132600\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2926ae0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1014 07:22:09.597939   13076 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 07:22:09.598267   13076 cert_rotation.go:140] Starting client certificate rotation controller
	I1014 07:22:09.598703   13076 addons.go:234] Setting addon default-storageclass=true in "ha-132600"
	I1014 07:22:09.598956   13076 host.go:66] Checking if "ha-132600" exists ...
	I1014 07:22:09.600143   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:22:09.600143   13076 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 07:22:09.600247   13076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1014 07:22:09.600352   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:22:11.901872   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:22:11.901872   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:11.902855   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:22:11.913525   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:22:11.914554   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:11.914725   13076 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1014 07:22:11.914841   13076 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1014 07:22:11.915002   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:22:14.163865   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:22:14.163865   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:14.163865   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:22:14.597752   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:22:14.597752   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:14.598273   13076 sshutil.go:53] new ssh client: &{IP:172.20.108.120 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\id_rsa Username:docker}
	I1014 07:22:14.755906   13076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 07:22:16.766331   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:22:16.766798   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:16.767079   13076 sshutil.go:53] new ssh client: &{IP:172.20.108.120 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\id_rsa Username:docker}
	I1014 07:22:16.907784   13076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1014 07:22:17.107355   13076 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1014 07:22:17.107355   13076 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1014 07:22:17.108322   13076 round_trippers.go:463] GET https://172.20.111.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1014 07:22:17.108322   13076 round_trippers.go:469] Request Headers:
	I1014 07:22:17.108322   13076 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:22:17.108322   13076 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:22:17.120122   13076 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1014 07:22:17.120927   13076 round_trippers.go:463] PUT https://172.20.111.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1014 07:22:17.121006   13076 round_trippers.go:469] Request Headers:
	I1014 07:22:17.121006   13076 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:22:17.121006   13076 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:22:17.121089   13076 round_trippers.go:473]     Content-Type: application/json
	I1014 07:22:17.124876   13076 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 07:22:17.128782   13076 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1014 07:22:17.133792   13076 addons.go:510] duration metric: took 9.8239112s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1014 07:22:17.133792   13076 start.go:246] waiting for cluster config update ...
	I1014 07:22:17.133792   13076 start.go:255] writing updated cluster config ...
	I1014 07:22:17.140793   13076 out.go:201] 
	I1014 07:22:17.150794   13076 config.go:182] Loaded profile config "ha-132600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:22:17.151784   13076 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\config.json ...
	I1014 07:22:17.157758   13076 out.go:177] * Starting "ha-132600-m02" control-plane node in "ha-132600" cluster
	I1014 07:22:17.160381   13076 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1014 07:22:17.160381   13076 cache.go:56] Caching tarball of preloaded images
	I1014 07:22:17.160381   13076 preload.go:172] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1014 07:22:17.160381   13076 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1014 07:22:17.160381   13076 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\config.json ...
	I1014 07:22:17.168672   13076 start.go:360] acquireMachinesLock for ha-132600-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 07:22:17.169194   13076 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-132600-m02"
	I1014 07:22:17.169334   13076 start.go:93] Provisioning new machine with config: &{Name:ha-132600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-132600 Namespace:default APIServerHAVIP:172.20.111.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.108.120 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1014 07:22:17.169334   13076 start.go:125] createHost starting for "m02" (driver="hyperv")
	I1014 07:22:17.171957   13076 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1014 07:22:17.171957   13076 start.go:159] libmachine.API.Create for "ha-132600" (driver="hyperv")
	I1014 07:22:17.171957   13076 client.go:168] LocalClient.Create starting
	I1014 07:22:17.171957   13076 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I1014 07:22:17.173148   13076 main.go:141] libmachine: Decoding PEM data...
	I1014 07:22:17.173148   13076 main.go:141] libmachine: Parsing certificate...
	I1014 07:22:17.173148   13076 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I1014 07:22:17.173148   13076 main.go:141] libmachine: Decoding PEM data...
	I1014 07:22:17.173802   13076 main.go:141] libmachine: Parsing certificate...
	I1014 07:22:17.173802   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I1014 07:22:19.094383   13076 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I1014 07:22:19.094383   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:19.094970   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I1014 07:22:20.808796   13076 main.go:141] libmachine: [stdout =====>] : False
	
	I1014 07:22:20.808796   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:20.808897   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1014 07:22:22.292959   13076 main.go:141] libmachine: [stdout =====>] : True
	
	I1014 07:22:22.292959   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:22.293074   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1014 07:22:25.811891   13076 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1014 07:22:25.811891   13076 main.go:141] libmachine: [stderr =====>] : 
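Every Hyper-V query in this driver round-trips through powershell.exe; where structured output is needed, as with the switch listing above, it asks for ConvertTo-Json and decodes the result. A sketch of that decode step (struct shape inferred from the JSON in the log; the PowerShell body is abbreviated here):

    package hyperv

    import (
        "encoding/json"
        "os/exec"
    )

    type vmSwitch struct {
        Id         string
        Name       string
        SwitchType int // per Hyper-V's VMSwitchType enum; the Default Switch reports 1 (Internal)
    }

    // listSwitches runs the same ConvertTo-Json query seen in the log and
    // decodes it into Go structs.
    func listSwitches() ([]vmSwitch, error) {
        out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive",
            "ConvertTo-Json @(Hyper-V\\Get-VMSwitch | Select Id, Name, SwitchType)").Output()
        if err != nil {
            return nil, err
        }
        var switches []vmSwitch
        err = json.Unmarshal(out, &switches)
        return switches, err
    }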
	I1014 07:22:25.813583   13076 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1014 07:22:26.334218   13076 main.go:141] libmachine: Creating SSH key...
	I1014 07:22:26.579833   13076 main.go:141] libmachine: Creating VM...
	I1014 07:22:26.580305   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1014 07:22:29.402237   13076 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1014 07:22:29.402237   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:29.402237   13076 main.go:141] libmachine: Using switch "Default Switch"
	I1014 07:22:29.402237   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1014 07:22:31.181025   13076 main.go:141] libmachine: [stdout =====>] : True
	
	I1014 07:22:31.181025   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:31.181025   13076 main.go:141] libmachine: Creating VHD
	I1014 07:22:31.181025   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I1014 07:22:34.986705   13076 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 9BEBB030-6569-4A5D-A43D-C976E242B24D
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I1014 07:22:34.986953   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:34.986953   13076 main.go:141] libmachine: Writing magic tar header
	I1014 07:22:34.986953   13076 main.go:141] libmachine: Writing SSH key tar header
	I1014 07:22:34.998864   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I1014 07:22:38.117497   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:22:38.117497   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:38.117497   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\disk.vhd' -SizeBytes 20000MB
	I1014 07:22:40.750048   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:22:40.750048   13076 main.go:141] libmachine: [stderr =====>] : 
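"Writing magic tar header" / "Writing SSH key tar header" refer to the boot2docker userdata convention: before the 10MB fixed VHD is converted to a dynamic disk and grown to 20000MB, a tar archive carrying the machine's SSH key is written at the start of the raw image, where the guest's init later finds it and installs the key. A hedged sketch of that write, assuming the userdata layout (in a fixed VHD the raw data precedes the footer, so offset 0 is writable):

    package hyperv

    import (
        "archive/tar"
        "os"
    )

    // writeUserdataTar writes a tar containing the SSH public key at
    // offset 0 of the fixed VHD, per the boot2docker userdata convention.
    func writeUserdataTar(vhdPath string, pubKey []byte) error {
        f, err := os.OpenFile(vhdPath, os.O_WRONLY, 0)
        if err != nil {
            return err
        }
        defer f.Close()
        tw := tar.NewWriter(f)
        hdr := &tar.Header{Name: ".ssh/authorized_keys", Mode: 0644, Size: int64(len(pubKey))}
        if err := tw.WriteHeader(hdr); err != nil {
            return err
        }
        if _, err := tw.Write(pubKey); err != nil {
            return err
        }
        return tw.Close()
    }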
	I1014 07:22:40.750048   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-132600-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I1014 07:22:44.293371   13076 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-132600-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I1014 07:22:44.294009   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:44.294158   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-132600-m02 -DynamicMemoryEnabled $false
	I1014 07:22:46.495846   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:22:46.496106   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:46.496211   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-132600-m02 -Count 2
	I1014 07:22:48.637702   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:22:48.637702   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:48.637702   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-132600-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\boot2docker.iso'
	I1014 07:22:51.128606   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:22:51.128606   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:51.128606   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-132600-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\disk.vhd'
	I1014 07:22:53.765202   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:22:53.765775   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:53.765775   13076 main.go:141] libmachine: Starting VM...
	I1014 07:22:53.765847   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-132600-m02
	I1014 07:22:56.961396   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:22:56.961396   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:56.962040   13076 main.go:141] libmachine: Waiting for host to start...
	I1014 07:22:56.962040   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:22:59.251592   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:22:59.251592   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:59.251838   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:01.758067   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:23:01.758159   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:02.758293   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:04.921490   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:04.921565   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:04.921802   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:07.441595   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:23:07.441595   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:08.443053   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:10.659721   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:10.659721   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:10.660418   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:13.126618   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:23:13.126952   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:14.127183   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:16.307414   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:16.307414   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:16.307500   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:18.748902   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:23:18.749678   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:19.749962   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:21.893323   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:21.893323   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:21.893323   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:24.464351   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:23:24.465392   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:24.465471   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:26.567626   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:26.567626   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:26.567626   13076 machine.go:93] provisionDockerMachine start ...
	I1014 07:23:26.568400   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:28.681591   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:28.681591   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:28.682618   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:31.178308   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:23:31.178308   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:31.192309   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:23:31.207523   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.111.83 22 <nil> <nil>}
	I1014 07:23:31.208024   13076 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 07:23:31.337861   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1014 07:23:31.337861   13076 buildroot.go:166] provisioning hostname "ha-132600-m02"
	I1014 07:23:31.337941   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:33.402839   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:33.402839   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:33.403340   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:35.894312   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:23:35.894312   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:35.901210   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:23:35.901699   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.111.83 22 <nil> <nil>}
	I1014 07:23:35.901823   13076 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-132600-m02 && echo "ha-132600-m02" | sudo tee /etc/hostname
	I1014 07:23:36.071040   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-132600-m02
	
	I1014 07:23:36.071040   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:38.150707   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:38.151729   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:38.151729   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:40.639242   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:23:40.639242   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:40.644598   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:23:40.645419   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.111.83 22 <nil> <nil>}
	I1014 07:23:40.645490   13076 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-132600-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-132600-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-132600-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 07:23:40.801974   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: 
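
The /etc/hosts snippet above is idempotent: it exits early when a line already ends in the new hostname, rewrites an existing 127.0.1.1 entry in place when one exists, and only appends otherwise, so re-running provisioning cannot duplicate entries. A sketch of how such a snippet can be templated from Go (a reconstruction that mirrors the logged shell text, not the provisioner's actual source):

package main

import "fmt"

// hostsFixup renders the idempotent /etc/hosts update shown above for a
// given node name.
func hostsFixup(name string) string {
	return fmt.Sprintf(`
		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
			fi
		fi`, name)
}

func main() {
	fmt.Println(hostsFixup("ha-132600-m02"))
}
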
	I1014 07:23:40.801974   13076 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I1014 07:23:40.801974   13076 buildroot.go:174] setting up certificates
	I1014 07:23:40.801974   13076 provision.go:84] configureAuth start
	I1014 07:23:40.801974   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:42.869101   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:42.869101   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:42.869101   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:45.382211   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:23:45.382688   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:45.382688   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:47.501182   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:47.501778   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:47.501832   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:50.008838   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:23:50.008838   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:50.009193   13076 provision.go:143] copyHostCerts
	I1014 07:23:50.009346   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I1014 07:23:50.009346   13076 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I1014 07:23:50.009346   13076 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I1014 07:23:50.010015   13076 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1014 07:23:50.011395   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I1014 07:23:50.011967   13076 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I1014 07:23:50.011967   13076 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I1014 07:23:50.011967   13076 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1014 07:23:50.013212   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I1014 07:23:50.014109   13076 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I1014 07:23:50.014109   13076 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I1014 07:23:50.014507   13076 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I1014 07:23:50.015649   13076 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-132600-m02 san=[127.0.0.1 172.20.111.83 ha-132600-m02 localhost minikube]
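
The per-node server certificate is minted from the shared CA, with the san=[...] list above covering loopback, the node's current Hyper-V IP, its hostname, and the generic names localhost and minikube, so the Docker TLS endpoint stays valid under any of those addresses. A compact sketch with crypto/x509 that splits SANs into IPs and DNS names the way that list implies; serial, expiry, and key size here are placeholder choices, not minikube's:

package certs

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// newServerCert signs a TLS server certificate for the given SANs with an
// existing CA. Sketch only.
func newServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey, org string, sans []string) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{org}}, // e.g. "jenkins.ha-132600-m02"
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	for _, san := range sans { // e.g. 127.0.0.1 172.20.111.83 ha-132600-m02 localhost minikube
		if ip := net.ParseIP(san); ip != nil {
			tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
		} else {
			tmpl.DNSNames = append(tmpl.DNSNames, san)
		}
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	return der, key, err
}
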
	I1014 07:23:50.130152   13076 provision.go:177] copyRemoteCerts
	I1014 07:23:50.140319   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 07:23:50.140319   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:52.242533   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:52.242533   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:52.243067   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:54.757999   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:23:54.758132   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:54.758132   13076 sshutil.go:53] new ssh client: &{IP:172.20.111.83 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\id_rsa Username:docker}
	I1014 07:23:54.867658   13076 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7273333s)
	I1014 07:23:54.867658   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1014 07:23:54.867658   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1014 07:23:54.915126   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1014 07:23:54.915808   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I1014 07:23:54.962775   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1014 07:23:54.963449   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1014 07:23:55.012733   13076 provision.go:87] duration metric: took 14.2107408s to configureAuth
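
The odd-looking `sudo mkdir -p /etc/docker /etc/docker /etc/docker` in the copyRemoteCerts step above most likely falls out of deriving one target directory per remote cert path; since all three certs land in /etc/docker and mkdir -p is idempotent, the repetition is harmless. A sketch of that derivation, offered only to explain the logged command:

package main

import (
	"fmt"
	"path"
	"strings"
)

func main() {
	remote := []string{"/etc/docker/ca.pem", "/etc/docker/server.pem", "/etc/docker/server-key.pem"}
	dirs := make([]string, 0, len(remote))
	for _, p := range remote {
		dirs = append(dirs, path.Dir(p)) // one dir per cert, duplicates and all
	}
	fmt.Println("sudo mkdir -p " + strings.Join(dirs, " "))
	// Output: sudo mkdir -p /etc/docker /etc/docker /etc/docker
}
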
	I1014 07:23:55.012733   13076 buildroot.go:189] setting minikube options for container-runtime
	I1014 07:23:55.013734   13076 config.go:182] Loaded profile config "ha-132600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:23:55.013734   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:57.174688   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:57.174688   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:57.175736   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:59.668037   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:23:59.668597   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:59.675074   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:23:59.675608   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.111.83 22 <nil> <nil>}
	I1014 07:23:59.675873   13076 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1014 07:23:59.826626   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1014 07:23:59.826626   13076 buildroot.go:70] root file system type: tmpfs
	I1014 07:23:59.826926   13076 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1014 07:23:59.827038   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:01.963890   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:01.964568   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:01.964568   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:04.515125   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:04.515943   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:04.521824   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:24:04.522248   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.111.83 22 <nil> <nil>}
	I1014 07:24:04.522248   13076 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.20.108.120"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1014 07:24:04.691427   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.20.108.120
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1014 07:24:04.692090   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:06.819823   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:06.820339   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:06.820339   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:09.321630   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:09.321630   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:09.326748   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:24:09.327748   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.111.83 22 <nil> <nil>}
	I1014 07:24:09.327808   13076 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1014 07:24:11.561493   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1014 07:24:11.561493   13076 machine.go:96] duration metric: took 44.993809s to provisionDockerMachine
	I1014 07:24:11.561493   13076 client.go:171] duration metric: took 1m54.3893961s to LocalClient.Create
	I1014 07:24:11.561493   13076 start.go:167] duration metric: took 1m54.3893961s to libmachine.API.Create "ha-132600"
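
The `diff … || { mv …; systemctl … }` guard above makes the unit install idempotent: the replace-and-restart branch runs only when the rendered docker.service.new differs from the installed unit, and here the diff fails outright because no unit was installed yet, which is why the enable step logs the "Created symlink" line. The empty `ExecStart=` followed by a full `ExecStart=/usr/bin/dockerd …` is the standard systemd idiom for clearing an inherited start command before setting a new one, as the unit's own comments explain. A sketch of the guard as built from Go:

package main

import "fmt"

// installUnitCmd reproduces the idempotent install command from the log:
// only replace the unit and cycle docker when the rendered file differs
// (or, as here, when the installed unit does not exist yet).
func installUnitCmd(unit string) string {
	return fmt.Sprintf(
		"sudo diff -u %[1]s %[1]s.new || { sudo mv %[1]s.new %[1]s; "+
			"sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && "+
			"sudo systemctl -f restart docker; }", unit)
}

func main() {
	fmt.Println(installUnitCmd("/lib/systemd/system/docker.service"))
}
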
	I1014 07:24:11.561493   13076 start.go:293] postStartSetup for "ha-132600-m02" (driver="hyperv")
	I1014 07:24:11.561493   13076 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 07:24:11.572490   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 07:24:11.572490   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:13.652544   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:13.653401   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:13.653599   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:16.188638   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:16.188638   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:16.189304   13076 sshutil.go:53] new ssh client: &{IP:172.20.111.83 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\id_rsa Username:docker}
	I1014 07:24:16.298812   13076 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7262867s)
	I1014 07:24:16.310231   13076 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 07:24:16.316927   13076 info.go:137] Remote host: Buildroot 2023.02.9
	I1014 07:24:16.316927   13076 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I1014 07:24:16.317465   13076 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I1014 07:24:16.318548   13076 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem -> 9362.pem in /etc/ssl/certs
	I1014 07:24:16.318548   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem -> /etc/ssl/certs/9362.pem
	I1014 07:24:16.332285   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 07:24:16.351469   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem --> /etc/ssl/certs/9362.pem (1708 bytes)
	I1014 07:24:16.401820   13076 start.go:296] duration metric: took 4.8403207s for postStartSetup
	I1014 07:24:16.404096   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:18.500218   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:18.501234   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:18.501234   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:20.974982   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:20.975125   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:20.975125   13076 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\config.json ...
	I1014 07:24:20.978124   13076 start.go:128] duration metric: took 2m3.8086365s to createHost
	I1014 07:24:20.978209   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:23.112937   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:23.113001   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:23.113001   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:25.636675   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:25.637206   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:25.641953   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:24:25.642670   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.111.83 22 <nil> <nil>}
	I1014 07:24:25.642670   13076 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1014 07:24:25.768945   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728915865.766702127
	
	I1014 07:24:25.769174   13076 fix.go:216] guest clock: 1728915865.766702127
	I1014 07:24:25.769174   13076 fix.go:229] Guest: 2024-10-14 07:24:25.766702127 -0700 PDT Remote: 2024-10-14 07:24:20.978124 -0700 PDT m=+320.730336401 (delta=4.788578127s)
	I1014 07:24:25.769174   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:27.845838   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:27.845838   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:27.845838   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:30.364175   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:30.364263   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:30.369382   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:24:30.369967   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.111.83 22 <nil> <nil>}
	I1014 07:24:30.370049   13076 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1728915865
	I1014 07:24:30.508807   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Oct 14 14:24:25 UTC 2024
	
	I1014 07:24:30.508807   13076 fix.go:236] clock set: Mon Oct 14 14:24:25 UTC 2024
	 (err=<nil>)
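
The clock-sync step above reads the guest clock with `date +%s.%N`, compares it to the host's time (a 4.788578127s skew here), and rewrites the guest clock with `sudo date -s @<seconds>`; note that the @1728915865 argument carries whole seconds only, so sub-second drift survives the fix. A sketch of parsing that guest timestamp (hypothetical helper; the delta check reproduces the logged value):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts "date +%s.%N" output such as
// "1728915865.766702127" into a time.Time so the skew logged by fix.go
// can be computed. Hypothetical helper.
func parseGuestClock(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		frac := (parts[1] + "000000000")[:9] // right-pad the fraction to nanoseconds
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, _ := parseGuestClock("1728915865.766702127")
	host := time.Date(2024, 10, 14, 7, 24, 20, 978124000, time.FixedZone("PDT", -7*3600))
	fmt.Println(guest.Sub(host)) // 4.788578127s, the delta in the log
}
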
	I1014 07:24:30.508807   13076 start.go:83] releasing machines lock for "ha-132600-m02", held for 2m13.3394475s
	I1014 07:24:30.508807   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:32.570516   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:32.570516   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:32.570516   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:35.081338   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:35.081338   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:35.084330   13076 out.go:177] * Found network options:
	I1014 07:24:35.087197   13076 out.go:177]   - NO_PROXY=172.20.108.120
	W1014 07:24:35.089531   13076 proxy.go:119] fail to check proxy env: Error ip not in block
	I1014 07:24:35.091879   13076 out.go:177]   - NO_PROXY=172.20.108.120
	W1014 07:24:35.094409   13076 proxy.go:119] fail to check proxy env: Error ip not in block
	W1014 07:24:35.095622   13076 proxy.go:119] fail to check proxy env: Error ip not in block
	I1014 07:24:35.098960   13076 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1014 07:24:35.099124   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:35.109488   13076 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1014 07:24:35.109488   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:37.293745   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:37.293818   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:37.293870   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:37.344951   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:37.344951   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:37.344951   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:39.933767   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:39.934139   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:39.934445   13076 sshutil.go:53] new ssh client: &{IP:172.20.111.83 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\id_rsa Username:docker}
	I1014 07:24:39.999458   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:39.999458   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:39.999458   13076 sshutil.go:53] new ssh client: &{IP:172.20.111.83 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\id_rsa Username:docker}
	I1014 07:24:40.039802   13076 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.930308s)
	W1014 07:24:40.039802   13076 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 07:24:40.051889   13076 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 07:24:40.057601   13076 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.9585844s)
	W1014 07:24:40.057601   13076 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
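
The exit status 127 above is the telling detail: the command sent into the Linux guest is `curl.exe`, the host-side Windows binary name, so the registry probe fails with "command not found" rather than a real network error, and that is what surfaces as the "Failing to connect to https://registry.k8s.io/" warning a few lines below. Inside the VM the binary is plain `curl`. A guarded sketch, as a hypothetical fix rather than minikube's actual code:

package main

import "fmt"

// registryProbeCmd builds the reachability check that runs inside the guest.
// The guest is Linux even when the host is Windows, so the binary must be
// "curl"; the log above shows the host's "curl.exe" name leaking into the
// SSH command and exiting 127. Hypothetical fix sketch.
func registryProbeCmd() string {
	return "curl -sS -m 2 https://registry.k8s.io/"
}

func main() {
	fmt.Println(registryProbeCmd())
}
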
	I1014 07:24:40.090774   13076 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1014 07:24:40.090774   13076 start.go:495] detecting cgroup driver to use...
	I1014 07:24:40.091315   13076 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 07:24:40.139757   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1014 07:24:40.172033   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	W1014 07:24:40.178804   13076 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W1014 07:24:40.178804   13076 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1014 07:24:40.194040   13076 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1014 07:24:40.209169   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1014 07:24:40.241042   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1014 07:24:40.273436   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1014 07:24:40.304842   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1014 07:24:40.336705   13076 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 07:24:40.371635   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1014 07:24:40.404378   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1014 07:24:40.435805   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1014 07:24:40.485319   13076 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 07:24:40.505473   13076 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1014 07:24:40.516975   13076 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1014 07:24:40.550905   13076 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 07:24:40.581632   13076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:24:40.785773   13076 ssh_runner.go:195] Run: sudo systemctl restart containerd
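
The sed passes above rewrite /etc/containerd/config.toml in place: they pin the pause image, force the cgroupfs cgroup driver (SystemdCgroup = false), migrate v1/runc.v1 runtime references to the io.containerd.runc.v2 shim, point conf_dir at /etc/cni/net.d, re-enable unprivileged ports, and then restart containerd. The main edits, paired with their intent and reconstructed from the logged commands rather than taken from minikube source:

package main

import "fmt"

// containerdRewrites pairs each logged sed edit with its intent.
var containerdRewrites = []struct{ sed, intent string }{
	{`s|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|`, "pin the pause image"},
	{`s|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|`, "allow negative oom score adj"},
	{`s|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g`, "use the cgroupfs driver"},
	{`s|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g`, "retire the v1 linux runtime"},
	{`s|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g`, "standard CNI conf dir"},
}

func main() {
	for _, r := range containerdRewrites {
		fmt.Printf("sed -i -r '%s'  # %s\n", r.sed, r.intent)
	}
}
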
	I1014 07:24:40.819978   13076 start.go:495] detecting cgroup driver to use...
	I1014 07:24:40.830992   13076 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1014 07:24:40.876801   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 07:24:40.913930   13076 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 07:24:40.966444   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 07:24:41.006432   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1014 07:24:41.044531   13076 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1014 07:24:41.114617   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1014 07:24:41.139995   13076 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 07:24:41.187779   13076 ssh_runner.go:195] Run: which cri-dockerd
	I1014 07:24:41.207367   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1014 07:24:41.226988   13076 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1014 07:24:41.272924   13076 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1014 07:24:41.470204   13076 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1014 07:24:41.650925   13076 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1014 07:24:41.650925   13076 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1014 07:24:41.694417   13076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:24:41.893295   13076 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1014 07:25:43.008408   13076 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.115034s)
	I1014 07:25:43.020423   13076 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I1014 07:25:43.059257   13076 out.go:201] 
	W1014 07:25:43.062124   13076 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Oct 14 14:24:09 ha-132600-m02 systemd[1]: Starting Docker Application Container Engine...
	Oct 14 14:24:09 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:09.938489944Z" level=info msg="Starting up"
	Oct 14 14:24:09 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:09.941531731Z" level=info msg="containerd not running, starting managed containerd"
	Oct 14 14:24:09 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:09.942638427Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=671
	Oct 14 14:24:09 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:09.976195986Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.003978070Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004049170Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004119469Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004233069Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004318268Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004457268Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004664767Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004761167Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004782067Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004795467Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004887666Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.005331964Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.008453752Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.008545052Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.008673551Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.008759151Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.008859951Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.008988550Z" level=info msg="metadata content store policy set" policy=shared
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.034095951Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.034335550Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.034407650Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.034432250Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.034449450Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.034603849Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.034945948Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035129847Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035263947Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035294447Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035312846Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035328546Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035343646Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035419846Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035443346Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035495946Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035526946Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035542746Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035565345Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035583545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035598745Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035630745Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035648245Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035663945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035689645Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035709545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035731545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035750545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035764245Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035777845Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035792145Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035809345Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035833944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035849444Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035864444Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036025444Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036057644Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036074443Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036087843Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036099043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036115343Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036131843Z" level=info msg="NRI interface is disabled by configuration."
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036482842Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036737141Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036799541Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036822041Z" level=info msg="containerd successfully booted in 0.062283s"
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.009541514Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.046546477Z" level=info msg="Loading containers: start."
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.205429391Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.426461627Z" level=info msg="Loading containers: done."
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.448023414Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.448125314Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.448200613Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.448511212Z" level=info msg="Daemon has completed initialization"
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.556153647Z" level=info msg="API listen on /var/run/docker.sock"
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.556338446Z" level=info msg="API listen on [::]:2376"
	Oct 14 14:24:11 ha-132600-m02 systemd[1]: Started Docker Application Container Engine.
	Oct 14 14:24:41 ha-132600-m02 systemd[1]: Stopping Docker Application Container Engine...
	Oct 14 14:24:41 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:41.916201367Z" level=info msg="Processing signal 'terminated'"
	Oct 14 14:24:41 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:41.918697964Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Oct 14 14:24:41 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:41.919058063Z" level=info msg="Daemon shutdown complete"
	Oct 14 14:24:41 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:41.919215663Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Oct 14 14:24:41 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:41.919252063Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Oct 14 14:24:42 ha-132600-m02 systemd[1]: docker.service: Deactivated successfully.
	Oct 14 14:24:42 ha-132600-m02 systemd[1]: Stopped Docker Application Container Engine.
	Oct 14 14:24:42 ha-132600-m02 systemd[1]: Starting Docker Application Container Engine...
	Oct 14 14:24:42 ha-132600-m02 dockerd[1075]: time="2024-10-14T14:24:42.978494717Z" level=info msg="Starting up"
	Oct 14 14:25:43 ha-132600-m02 dockerd[1075]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Oct 14 14:25:43 ha-132600-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Oct 14 14:25:43 ha-132600-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Oct 14 14:25:43 ha-132600-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W1014 07:25:43.062124   13076 out.go:270] * 
	W1014 07:25:43.062827   13076 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 07:25:43.067027   13076 out.go:201] 
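
Reading the journal timeline above: the first dockerd (pid 665) started its own managed containerd and came up cleanly at 14:24:11, was stopped at 14:24:41 when minikube reconfigured it, and the restarted dockerd (pid 1075) then spent exactly 60 seconds trying to dial /run/containerd/containerd.sock before giving up at 14:25:43, which accounts for the one-minute `sudo systemctl restart docker` stall. One plausible reading, given the `sudo systemctl stop -f containerd` issued just before the restart, is that a stale system containerd socket left the new dockerd waiting on an endpoint nothing was serving; treat that as a hypothesis from this log, not a confirmed root cause. A hypothetical ordering guard for that scenario, sketched as a possible mitigation only:

package main

import "fmt"

// If docker is expected to reach a system containerd socket, make sure
// containerd is actually running before docker is restarted. Hypothetical
// mitigation; the log alone cannot confirm this is the root cause.
func main() {
	for _, cmd := range []string{
		"sudo systemctl daemon-reload",
		"sudo systemctl is-active --quiet containerd || sudo systemctl restart containerd",
		"sudo systemctl restart docker",
	} {
		fmt.Println(cmd)
	}
}
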
	
	
	==> Docker <==
	Oct 14 14:22:32 ha-132600 cri-dockerd[1324]: time="2024-10-14T14:22:32Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d7137fb8ec569089599a03e11d18eda509d210d29fa54b5e6a0a8d7dd7a54e7f/resolv.conf as [nameserver 172.20.96.1]"
	Oct 14 14:22:32 ha-132600 cri-dockerd[1324]: time="2024-10-14T14:22:32Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/90ffeb0ce1aafcb639832f7144bd2dbb0b5c9deb53555b49e386b3d3e96d8bc3/resolv.conf as [nameserver 172.20.96.1]"
	Oct 14 14:22:32 ha-132600 cri-dockerd[1324]: time="2024-10-14T14:22:32Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7278ecf7cc9466e1fee2170d699055f6498d2ece8da31c4b3696ed85b3cd38cf/resolv.conf as [nameserver 172.20.96.1]"
	Oct 14 14:22:32 ha-132600 dockerd[1431]: time="2024-10-14T14:22:32.922383735Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 14 14:22:32 ha-132600 dockerd[1431]: time="2024-10-14T14:22:32.922463635Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 14 14:22:32 ha-132600 dockerd[1431]: time="2024-10-14T14:22:32.922481235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:22:32 ha-132600 dockerd[1431]: time="2024-10-14T14:22:32.922580735Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:22:32 ha-132600 dockerd[1431]: time="2024-10-14T14:22:32.996609949Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 14 14:22:32 ha-132600 dockerd[1431]: time="2024-10-14T14:22:32.996807948Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 14 14:22:32 ha-132600 dockerd[1431]: time="2024-10-14T14:22:32.996907948Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:22:32 ha-132600 dockerd[1431]: time="2024-10-14T14:22:32.998275745Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:22:33 ha-132600 dockerd[1431]: time="2024-10-14T14:22:33.069443031Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 14 14:22:33 ha-132600 dockerd[1431]: time="2024-10-14T14:22:33.070022429Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 14 14:22:33 ha-132600 dockerd[1431]: time="2024-10-14T14:22:33.070369527Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:22:33 ha-132600 dockerd[1431]: time="2024-10-14T14:22:33.076841098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:26:17 ha-132600 dockerd[1431]: time="2024-10-14T14:26:17.675114485Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 14 14:26:17 ha-132600 dockerd[1431]: time="2024-10-14T14:26:17.675249284Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 14 14:26:17 ha-132600 dockerd[1431]: time="2024-10-14T14:26:17.675264584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:26:17 ha-132600 dockerd[1431]: time="2024-10-14T14:26:17.675974482Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:26:17 ha-132600 cri-dockerd[1324]: time="2024-10-14T14:26:17Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/91751edf004eb5236aa2443a5cff30bdc82ba05d39a3583d9026a4f19faba52f/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Oct 14 14:26:19 ha-132600 cri-dockerd[1324]: time="2024-10-14T14:26:19Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Oct 14 14:26:19 ha-132600 dockerd[1431]: time="2024-10-14T14:26:19.456978196Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 14 14:26:19 ha-132600 dockerd[1431]: time="2024-10-14T14:26:19.457173796Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 14 14:26:19 ha-132600 dockerd[1431]: time="2024-10-14T14:26:19.457616296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:26:19 ha-132600 dockerd[1431]: time="2024-10-14T14:26:19.457980197Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2b8d44d586369       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   22 minutes ago      Running             busybox                   0                   91751edf004eb       busybox-7dff88458-kr92j
	5a6196684fc6e       c69fa2e9cbf5f                                                                                         26 minutes ago      Running             coredns                   0                   7278ecf7cc946       coredns-7c65d6cfc9-4qfrq
	81d6fdac8115f       6e38f40d628db                                                                                         26 minutes ago      Running             storage-provisioner       0                   90ffeb0ce1aaf       storage-provisioner
	52c3e5370a6c8       c69fa2e9cbf5f                                                                                         26 minutes ago      Running             coredns                   0                   d7137fb8ec569       coredns-7c65d6cfc9-zf6cd
	dae2c0aa67af3       kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387              26 minutes ago      Running             kindnet-cni               0                   277e64b59b8aa       kindnet-rkjqr
	4745a4b0dc379       60c005f310ff3                                                                                         26 minutes ago      Running             kube-proxy                0                   15f605d31df67       kube-proxy-zkbj8
	b08f7a3c2a5a6       ghcr.io/kube-vip/kube-vip@sha256:9e23baad11ae3e69d739430b9fdb60df22356b7da4b4f4e458fae0541619deb4     26 minutes ago      Running             kube-vip                  0                   46cfd7b278d40       kube-vip-ha-132600
	84fd76d493d80       2e96e5913fc06                                                                                         26 minutes ago      Running             etcd                      0                   d71566d00e2f9       etcd-ha-132600
	b661cb6713103       6bab7719df100                                                                                         26 minutes ago      Running             kube-apiserver            0                   cb18be6165830       kube-apiserver-ha-132600
	35c870864a800       9aa1fad941575                                                                                         26 minutes ago      Running             kube-scheduler            0                   36593363b631a       kube-scheduler-ha-132600
	4a8cce31aa3a2       175ffd71cce3d                                                                                         26 minutes ago      Running             kube-controller-manager   0                   f5fa69fe68b07       kube-controller-manager-ha-132600
	
	
	==> coredns [52c3e5370a6c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6020d6b23d8ab86e45d1d2aab12a43bd19ffd1800c3e6fd0a66779be525e59cd5618fbe141b55ee3a43e920652bc72e7efafa696e55e63b2f614e5fc80dabe10
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:41885 - 43638 "HINFO IN 5147527986633681541.7511835962423692488. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.048565283s
	[INFO] 10.244.0.4:49801 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0004036s
	[INFO] 10.244.0.4:57121 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.178873632s
	[INFO] 10.244.0.4:54985 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000301s
	[INFO] 10.244.0.4:47603 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000273799s
	[INFO] 10.244.0.4:50030 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000276399s
	[INFO] 10.244.0.4:38124 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000439399s
	[INFO] 10.244.0.4:43764 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000347699s
	[INFO] 10.244.0.4:53583 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0003446s
	[INFO] 10.244.0.4:50771 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0002502s
	[INFO] 10.244.0.4:32805 - 5 "PTR IN 1.96.20.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.0002399s
	
	
	==> coredns [5a6196684fc6] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6020d6b23d8ab86e45d1d2aab12a43bd19ffd1800c3e6fd0a66779be525e59cd5618fbe141b55ee3a43e920652bc72e7efafa696e55e63b2f614e5fc80dabe10
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:50464 - 22593 "HINFO IN 8090798371432773748.7872157757733337044. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.084180423s
	[INFO] 10.244.0.4:46373 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.004105792s
	[INFO] 10.244.0.4:41175 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.166814894s
	[INFO] 10.244.0.4:43040 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002867s
	[INFO] 10.244.0.4:33549 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.045685805s
	[INFO] 10.244.0.4:60793 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000884598s
	[INFO] 10.244.0.4:57558 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001339s
	[INFO] 10.244.0.4:33138 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.040147217s
	[INFO] 10.244.0.4:46303 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0002673s
	[INFO] 10.244.0.4:60761 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001771s
	[INFO] 10.244.0.4:55567 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000249599s
	
	
	==> describe nodes <==
	Name:               ha-132600
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-132600
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b
	                    minikube.k8s.io/name=ha-132600
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_14T07_22_04_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Oct 2024 14:22:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-132600
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Oct 2024 14:48:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 14 Oct 2024 14:47:05 +0000   Mon, 14 Oct 2024 14:22:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 14 Oct 2024 14:47:05 +0000   Mon, 14 Oct 2024 14:22:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 14 Oct 2024 14:47:05 +0000   Mon, 14 Oct 2024 14:22:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 14 Oct 2024 14:47:05 +0000   Mon, 14 Oct 2024 14:22:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.20.108.120
	  Hostname:    ha-132600
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 07bcc58a0c7e4f0eaae96253dcf0a4ae
	  System UUID:                e06d00fc-5ebc-1a49-bf31-4c6ce500fe9c
	  Boot ID:                    215d765d-07e3-4bb7-ba3c-58ab142d5afa
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-kr92j              0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 coredns-7c65d6cfc9-4qfrq             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     26m
	  kube-system                 coredns-7c65d6cfc9-zf6cd             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     26m
	  kube-system                 etcd-ha-132600                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         26m
	  kube-system                 kindnet-rkjqr                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      26m
	  kube-system                 kube-apiserver-ha-132600             250m (12%)    0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-controller-manager-ha-132600    200m (10%)    0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-proxy-zkbj8                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-scheduler-ha-132600             100m (5%)     0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-vip-ha-132600                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 26m   kube-proxy       
	  Normal  Starting                 26m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  26m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  26m   kubelet          Node ha-132600 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    26m   kubelet          Node ha-132600 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     26m   kubelet          Node ha-132600 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26m   node-controller  Node ha-132600 event: Registered Node ha-132600 in Controller
	  Normal  NodeReady                26m   kubelet          Node ha-132600 status is now: NodeReady
	
	
	Name:               ha-132600-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-132600-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b
	                    minikube.k8s.io/name=ha-132600
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_14T07_42_14_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Oct 2024 14:42:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-132600-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Oct 2024 14:48:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 14 Oct 2024 14:48:21 +0000   Mon, 14 Oct 2024 14:42:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 14 Oct 2024 14:48:21 +0000   Mon, 14 Oct 2024 14:42:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 14 Oct 2024 14:48:21 +0000   Mon, 14 Oct 2024 14:42:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 14 Oct 2024 14:48:21 +0000   Mon, 14 Oct 2024 14:42:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.20.111.174
	  Hostname:    ha-132600-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 3833a05e76bc4842ac73a1d2223670ec
	  System UUID:                f1d27ade-116b-0642-ac95-727d62870b2a
	  Boot ID:                    07765716-ab52-4f7d-8d78-8fbd996c8cc0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-8thz6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kindnet-dznf8              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m25s
	  kube-system                 kube-proxy-q6wxd           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m12s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  6m26s (x2 over 6m26s)  kubelet          Node ha-132600-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  6m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  CIDRAssignmentFailed     6m25s                  cidrAllocator    Node ha-132600-m03 status is now: CIDRAssignmentFailed
	  Normal  NodeHasNoDiskPressure    6m25s (x2 over 6m26s)  kubelet          Node ha-132600-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m25s (x2 over 6m26s)  kubelet          Node ha-132600-m03 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m22s                  node-controller  Node ha-132600-m03 event: Registered Node ha-132600-m03 in Controller
	  Normal  NodeReady                5m53s                  kubelet          Node ha-132600-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +7.050916] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +45.956887] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.179434] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[Oct14 14:21] systemd-fstab-generator[997]: Ignoring "noauto" option for root device
	[  +0.095957] kauditd_printk_skb: 65 callbacks suppressed
	[  +0.527367] systemd-fstab-generator[1037]: Ignoring "noauto" option for root device
	[  +0.185829] systemd-fstab-generator[1049]: Ignoring "noauto" option for root device
	[  +0.220100] systemd-fstab-generator[1063]: Ignoring "noauto" option for root device
	[  +2.802946] systemd-fstab-generator[1277]: Ignoring "noauto" option for root device
	[  +0.209648] systemd-fstab-generator[1289]: Ignoring "noauto" option for root device
	[  +0.207153] systemd-fstab-generator[1301]: Ignoring "noauto" option for root device
	[  +0.288223] systemd-fstab-generator[1316]: Ignoring "noauto" option for root device
	[ +11.411364] systemd-fstab-generator[1416]: Ignoring "noauto" option for root device
	[  +0.107418] kauditd_printk_skb: 202 callbacks suppressed
	[  +3.752049] systemd-fstab-generator[1674]: Ignoring "noauto" option for root device
	[  +6.264384] systemd-fstab-generator[1823]: Ignoring "noauto" option for root device
	[  +0.102334] kauditd_printk_skb: 70 callbacks suppressed
	[  +5.637673] kauditd_printk_skb: 67 callbacks suppressed
	[Oct14 14:22] systemd-fstab-generator[2317]: Ignoring "noauto" option for root device
	[  +5.137329] kauditd_printk_skb: 17 callbacks suppressed
	[  +8.104433] kauditd_printk_skb: 29 callbacks suppressed
	[Oct14 14:26] kauditd_printk_skb: 28 callbacks suppressed
	[Oct14 14:42] hrtimer: interrupt took 6612884 ns
	
	
	==> etcd [84fd76d493d8] <==
	{"level":"warn","ts":"2024-10-14T14:42:06.838564Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"211.106689ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-14T14:42:06.838765Z","caller":"traceutil/trace.go:171","msg":"trace[744257141] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2578; }","duration":"211.373088ms","start":"2024-10-14T14:42:06.627381Z","end":"2024-10-14T14:42:06.838754Z","steps":["trace[744257141] 'agreement among raft nodes before linearized reading'  (duration: 210.908189ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-14T14:42:07.474608Z","caller":"traceutil/trace.go:171","msg":"trace[1611887997] linearizableReadLoop","detail":"{readStateIndex:2842; appliedIndex:2841; }","duration":"173.05738ms","start":"2024-10-14T14:42:07.301532Z","end":"2024-10-14T14:42:07.474589Z","steps":["trace[1611887997] 'read index received'  (duration: 172.879381ms)","trace[1611887997] 'applied index is now lower than readState.Index'  (duration: 177.299µs)"],"step_count":2}
	{"level":"info","ts":"2024-10-14T14:42:07.474817Z","caller":"traceutil/trace.go:171","msg":"trace[1935321534] transaction","detail":"{read_only:false; response_revision:2579; number_of_response:1; }","duration":"268.831248ms","start":"2024-10-14T14:42:07.205940Z","end":"2024-10-14T14:42:07.474771Z","steps":["trace[1935321534] 'process raft request'  (duration: 268.413149ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-14T14:42:07.475266Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"173.720679ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-14T14:42:07.475322Z","caller":"traceutil/trace.go:171","msg":"trace[213807606] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:2579; }","duration":"173.789679ms","start":"2024-10-14T14:42:07.301524Z","end":"2024-10-14T14:42:07.475314Z","steps":["trace[213807606] 'agreement among raft nodes before linearized reading'  (duration: 173.704979ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-14T14:42:19.018144Z","caller":"traceutil/trace.go:171","msg":"trace[410087756] transaction","detail":"{read_only:false; response_revision:2632; number_of_response:1; }","duration":"175.067373ms","start":"2024-10-14T14:42:18.843055Z","end":"2024-10-14T14:42:19.018123Z","steps":["trace[410087756] 'process raft request'  (duration: 174.514675ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-14T14:42:19.375954Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"237.591721ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-132600-m03\" ","response":"range_response_count:1 size:2962"}
	{"level":"info","ts":"2024-10-14T14:42:19.377451Z","caller":"traceutil/trace.go:171","msg":"trace[946791809] range","detail":"{range_begin:/registry/minions/ha-132600-m03; range_end:; response_count:1; response_revision:2632; }","duration":"239.116617ms","start":"2024-10-14T14:42:19.138318Z","end":"2024-10-14T14:42:19.377435Z","steps":["trace[946791809] 'range keys from in-memory index tree'  (duration: 237.374121ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-14T14:42:25.277317Z","caller":"traceutil/trace.go:171","msg":"trace[682053374] linearizableReadLoop","detail":"{readStateIndex:2917; appliedIndex:2916; }","duration":"140.762656ms","start":"2024-10-14T14:42:25.136515Z","end":"2024-10-14T14:42:25.277278Z","steps":["trace[682053374] 'read index received'  (duration: 140.586757ms)","trace[682053374] 'applied index is now lower than readState.Index'  (duration: 175.299µs)"],"step_count":2}
	{"level":"warn","ts":"2024-10-14T14:42:25.277542Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"140.959056ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-132600-m03\" ","response":"range_response_count:1 size:3114"}
	{"level":"info","ts":"2024-10-14T14:42:25.277634Z","caller":"traceutil/trace.go:171","msg":"trace[820348600] range","detail":"{range_begin:/registry/minions/ha-132600-m03; range_end:; response_count:1; response_revision:2648; }","duration":"141.112155ms","start":"2024-10-14T14:42:25.136511Z","end":"2024-10-14T14:42:25.277623Z","steps":["trace[820348600] 'agreement among raft nodes before linearized reading'  (duration: 140.928056ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-14T14:42:25.279675Z","caller":"traceutil/trace.go:171","msg":"trace[858523099] transaction","detail":"{read_only:false; response_revision:2648; number_of_response:1; }","duration":"166.194393ms","start":"2024-10-14T14:42:25.113464Z","end":"2024-10-14T14:42:25.279658Z","steps":["trace[858523099] 'process raft request'  (duration: 163.709399ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-14T14:42:25.921423Z","caller":"traceutil/trace.go:171","msg":"trace[45131034] transaction","detail":"{read_only:false; response_revision:2649; number_of_response:1; }","duration":"240.489012ms","start":"2024-10-14T14:42:25.680919Z","end":"2024-10-14T14:42:25.921408Z","steps":["trace[45131034] 'process raft request'  (duration: 240.146113ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-14T14:42:31.076106Z","caller":"traceutil/trace.go:171","msg":"trace[1663891298] transaction","detail":"{read_only:false; response_revision:2663; number_of_response:1; }","duration":"120.948703ms","start":"2024-10-14T14:42:30.955138Z","end":"2024-10-14T14:42:31.076086Z","steps":["trace[1663891298] 'process raft request'  (duration: 120.791304ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-14T14:42:31.287943Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"149.945532ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-132600-m03\" ","response":"range_response_count:1 size:3114"}
	{"level":"info","ts":"2024-10-14T14:42:31.288066Z","caller":"traceutil/trace.go:171","msg":"trace[1149000350] range","detail":"{range_begin:/registry/minions/ha-132600-m03; range_end:; response_count:1; response_revision:2663; }","duration":"150.051932ms","start":"2024-10-14T14:42:31.137978Z","end":"2024-10-14T14:42:31.288030Z","steps":["trace[1149000350] 'range keys from in-memory index tree'  (duration: 149.753933ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-14T14:42:32.313366Z","caller":"traceutil/trace.go:171","msg":"trace[1148264888] linearizableReadLoop","detail":"{readStateIndex:2939; appliedIndex:2938; }","duration":"175.689169ms","start":"2024-10-14T14:42:32.137533Z","end":"2024-10-14T14:42:32.313222Z","steps":["trace[1148264888] 'read index received'  (duration: 175.27797ms)","trace[1148264888] 'applied index is now lower than readState.Index'  (duration: 410.299µs)"],"step_count":2}
	{"level":"info","ts":"2024-10-14T14:42:32.313766Z","caller":"traceutil/trace.go:171","msg":"trace[53442055] transaction","detail":"{read_only:false; response_revision:2667; number_of_response:1; }","duration":"341.166363ms","start":"2024-10-14T14:42:31.972588Z","end":"2024-10-14T14:42:32.313754Z","steps":["trace[53442055] 'process raft request'  (duration: 340.380665ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-14T14:42:32.314054Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-14T14:42:31.972576Z","time spent":"341.263763ms","remote":"127.0.0.1:55456","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1094,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:2661 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1021 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-10-14T14:42:32.314975Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"177.510264ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-132600-m03\" ","response":"range_response_count:1 size:3114"}
	{"level":"info","ts":"2024-10-14T14:42:32.315174Z","caller":"traceutil/trace.go:171","msg":"trace[597120152] range","detail":"{range_begin:/registry/minions/ha-132600-m03; range_end:; response_count:1; response_revision:2667; }","duration":"177.711264ms","start":"2024-10-14T14:42:32.137452Z","end":"2024-10-14T14:42:32.315163Z","steps":["trace[597120152] 'agreement among raft nodes before linearized reading'  (duration: 177.444264ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-14T14:46:56.596123Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2559}
	{"level":"info","ts":"2024-10-14T14:46:56.611577Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":2559,"took":"14.540461ms","hash":3178179299,"current-db-size-bytes":2416640,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2015232,"current-db-size-in-use":"2.0 MB"}
	{"level":"info","ts":"2024-10-14T14:46:56.611712Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3178179299,"revision":2559,"compact-revision":2024}
	
	
	==> kernel <==
	 14:48:39 up 28 min,  0 users,  load average: 0.94, 0.60, 0.43
	Linux ha-132600 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [dae2c0aa67af] <==
	I1014 14:47:37.567033       1 main.go:300] handling current node
	I1014 14:47:47.572194       1 main.go:296] Handling node with IPs: map[172.20.108.120:{}]
	I1014 14:47:47.572258       1 main.go:300] handling current node
	I1014 14:47:47.572278       1 main.go:296] Handling node with IPs: map[172.20.111.174:{}]
	I1014 14:47:47.572285       1 main.go:323] Node ha-132600-m03 has CIDR [10.244.1.0/24] 
	I1014 14:47:57.571086       1 main.go:296] Handling node with IPs: map[172.20.108.120:{}]
	I1014 14:47:57.571136       1 main.go:300] handling current node
	I1014 14:47:57.571176       1 main.go:296] Handling node with IPs: map[172.20.111.174:{}]
	I1014 14:47:57.571190       1 main.go:323] Node ha-132600-m03 has CIDR [10.244.1.0/24] 
	I1014 14:48:07.571486       1 main.go:296] Handling node with IPs: map[172.20.111.174:{}]
	I1014 14:48:07.571552       1 main.go:323] Node ha-132600-m03 has CIDR [10.244.1.0/24] 
	I1014 14:48:07.572044       1 main.go:296] Handling node with IPs: map[172.20.108.120:{}]
	I1014 14:48:07.572135       1 main.go:300] handling current node
	I1014 14:48:17.563030       1 main.go:296] Handling node with IPs: map[172.20.108.120:{}]
	I1014 14:48:17.563150       1 main.go:300] handling current node
	I1014 14:48:17.563179       1 main.go:296] Handling node with IPs: map[172.20.111.174:{}]
	I1014 14:48:17.563207       1 main.go:323] Node ha-132600-m03 has CIDR [10.244.1.0/24] 
	I1014 14:48:27.566268       1 main.go:296] Handling node with IPs: map[172.20.108.120:{}]
	I1014 14:48:27.566375       1 main.go:300] handling current node
	I1014 14:48:27.566397       1 main.go:296] Handling node with IPs: map[172.20.111.174:{}]
	I1014 14:48:27.566404       1 main.go:323] Node ha-132600-m03 has CIDR [10.244.1.0/24] 
	I1014 14:48:37.563371       1 main.go:296] Handling node with IPs: map[172.20.108.120:{}]
	I1014 14:48:37.563509       1 main.go:300] handling current node
	I1014 14:48:37.563558       1 main.go:296] Handling node with IPs: map[172.20.111.174:{}]
	I1014 14:48:37.563567       1 main.go:323] Node ha-132600-m03 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [b661cb671310] <==
	I1014 14:21:58.925388       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1014 14:21:58.925426       1 policy_source.go:224] refreshing policies
	I1014 14:21:58.928296       1 controller.go:615] quota admission added evaluator for: namespaces
	E1014 14:21:58.950810       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1014 14:21:59.174958       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1014 14:21:59.819127       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1014 14:21:59.829797       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1014 14:21:59.829835       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1014 14:22:00.942976       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1014 14:22:01.015766       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1014 14:22:01.150423       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1014 14:22:01.164901       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.20.108.120]
	I1014 14:22:01.165818       1 controller.go:615] quota admission added evaluator for: endpoints
	I1014 14:22:01.175247       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1014 14:22:01.835583       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1014 14:22:03.554010       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1014 14:22:03.589407       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1014 14:22:03.617368       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1014 14:22:07.346752       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1014 14:22:07.516653       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E1014 14:38:03.250866       1 conn.go:339] Error on socket receive: read tcp 172.20.111.254:8443->172.20.96.1:57382: use of closed network connection
	E1014 14:38:04.469699       1 conn.go:339] Error on socket receive: read tcp 172.20.111.254:8443->172.20.96.1:57390: use of closed network connection
	E1014 14:38:05.596658       1 conn.go:339] Error on socket receive: read tcp 172.20.111.254:8443->172.20.96.1:57398: use of closed network connection
	E1014 14:38:40.228042       1 conn.go:339] Error on socket receive: read tcp 172.20.111.254:8443->172.20.96.1:57419: use of closed network connection
	E1014 14:38:50.673483       1 conn.go:339] Error on socket receive: read tcp 172.20.111.254:8443->172.20.96.1:57421: use of closed network connection
	
	
	==> kube-controller-manager [4a8cce31aa3a] <==
	I1014 14:42:14.099308       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-132600-m03" podCIDRs=["10.244.1.0/24"]
	I1014 14:42:14.099376       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600-m03"
	E1014 14:42:14.131350       1 range_allocator.go:427] "Failed to update node PodCIDR after multiple attempts" err="failed to patch node CIDR: Node \"ha-132600-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.2.0/24\", \"10.244.1.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="ha-132600-m03" podCIDRs=["10.244.2.0/24"]
	E1014 14:42:14.131490       1 range_allocator.go:433] "CIDR assignment for node failed. Releasing allocated CIDR" err="failed to patch node CIDR: Node \"ha-132600-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.2.0/24\", \"10.244.1.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="ha-132600-m03"
	E1014 14:42:14.131543       1 range_allocator.go:246] "Unhandled Error" err="error syncing 'ha-132600-m03': failed to patch node CIDR: Node \"ha-132600-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.2.0/24\", \"10.244.1.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid], requeuing" logger="UnhandledError"
	I1014 14:42:14.132334       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600-m03"
	I1014 14:42:14.137556       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600-m03"
	I1014 14:42:14.252734       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600-m03"
	I1014 14:42:14.858650       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600-m03"
	I1014 14:42:17.081634       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-132600-m03"
	I1014 14:42:17.154621       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600-m03"
	I1014 14:42:24.267607       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600-m03"
	I1014 14:42:44.660569       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600-m03"
	I1014 14:42:46.187605       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-132600-m03"
	I1014 14:42:46.189657       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600-m03"
	I1014 14:42:46.206592       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600-m03"
	I1014 14:42:46.221387       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="51µs"
	I1014 14:42:46.232286       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="51.3µs"
	I1014 14:42:46.252360       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="196µs"
	I1014 14:42:47.108442       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600-m03"
	I1014 14:42:49.071073       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="17.613156ms"
	I1014 14:42:49.071582       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="49.8µs"
	I1014 14:43:14.925968       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600-m03"
	I1014 14:47:05.189799       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600"
	I1014 14:48:21.846100       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600-m03"
	
	
	==> kube-proxy [4745a4b0dc37] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1014 14:22:09.024513       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1014 14:22:09.047174       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["172.20.108.120"]
	E1014 14:22:09.047308       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1014 14:22:09.124346       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1014 14:22:09.124494       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1014 14:22:09.124529       1 server_linux.go:169] "Using iptables Proxier"
	I1014 14:22:09.128809       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1014 14:22:09.129721       1 server.go:483] "Version info" version="v1.31.1"
	I1014 14:22:09.129805       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 14:22:09.131652       1 config.go:199] "Starting service config controller"
	I1014 14:22:09.131988       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1014 14:22:09.132269       1 config.go:105] "Starting endpoint slice config controller"
	I1014 14:22:09.132619       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1014 14:22:09.133592       1 config.go:328] "Starting node config controller"
	I1014 14:22:09.135184       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1014 14:22:09.232621       1 shared_informer.go:320] Caches are synced for service config
	I1014 14:22:09.233933       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1014 14:22:09.236054       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [35c870864a80] <==
	W1014 14:21:59.944424       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1014 14:21:59.944452       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 14:21:59.979078       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1014 14:21:59.979365       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 14:22:00.111250       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1014 14:22:00.111664       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 14:22:00.192015       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1014 14:22:00.192346       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1014 14:22:00.196984       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1014 14:22:00.197112       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 14:22:00.197208       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1014 14:22:00.197241       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1014 14:22:00.212172       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1014 14:22:00.212214       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1014 14:22:00.260144       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1014 14:22:00.261002       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1014 14:22:00.276235       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1014 14:22:00.276419       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1014 14:22:00.295031       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1014 14:22:00.295892       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 14:22:00.311664       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1014 14:22:00.313212       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1014 14:22:00.354351       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1014 14:22:00.355204       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 14:22:02.117335       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 14 14:44:03 ha-132600 kubelet[2324]: E1014 14:44:03.676935    2324 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 14 14:44:03 ha-132600 kubelet[2324]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 14 14:44:03 ha-132600 kubelet[2324]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 14 14:44:03 ha-132600 kubelet[2324]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 14 14:44:03 ha-132600 kubelet[2324]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 14 14:45:03 ha-132600 kubelet[2324]: E1014 14:45:03.682657    2324 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 14 14:45:03 ha-132600 kubelet[2324]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 14 14:45:03 ha-132600 kubelet[2324]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 14 14:45:03 ha-132600 kubelet[2324]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 14 14:45:03 ha-132600 kubelet[2324]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 14 14:46:03 ha-132600 kubelet[2324]: E1014 14:46:03.687832    2324 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 14 14:46:03 ha-132600 kubelet[2324]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 14 14:46:03 ha-132600 kubelet[2324]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 14 14:46:03 ha-132600 kubelet[2324]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 14 14:46:03 ha-132600 kubelet[2324]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 14 14:47:03 ha-132600 kubelet[2324]: E1014 14:47:03.679016    2324 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 14 14:47:03 ha-132600 kubelet[2324]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 14 14:47:03 ha-132600 kubelet[2324]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 14 14:47:03 ha-132600 kubelet[2324]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 14 14:47:03 ha-132600 kubelet[2324]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 14 14:48:03 ha-132600 kubelet[2324]: E1014 14:48:03.676932    2324 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 14 14:48:03 ha-132600 kubelet[2324]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 14 14:48:03 ha-132600 kubelet[2324]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 14 14:48:03 ha-132600 kubelet[2324]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 14 14:48:03 ha-132600 kubelet[2324]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
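Triage note: the one hard failure in the controller-manager log above is the CIDRAssignmentFailed sequence for ha-132600-m03. The first allocation (10.244.1.0/24) was persisted, so the retried patch adding 10.244.2.0/24 was rejected by API validation: spec.podCIDRs may hold at most one CIDR per IP family and is immutable once set. The node still reached Ready with PodCIDR 10.244.1.0/24, so this looks cosmetic here. Two hedged kubectl checks (reusing the ha-132600 context from the steps below) to confirm the assigned ranges and surface any further allocator events:

	kubectl --context ha-132600 get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDRs}{"\n"}{end}'
	kubectl --context ha-132600 get events -A --field-selector reason=CIDRAssignmentFailed

The recurring kubelet "Could not set up iptables canary" errors are expected noise on this guest image: the kernel has no ip6tables nat table, and kube-proxy is running single-stack IPv4 (see "No iptables support for family" ipFamily="IPv6" above).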
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-132600 -n ha-132600
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-132600 -n ha-132600: (11.7889061s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-132600 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-7dff88458-rng7p
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-132600 describe pod busybox-7dff88458-rng7p
helpers_test.go:282: (dbg) kubectl --context ha-132600 describe pod busybox-7dff88458-rng7p:

-- stdout --
	Name:             busybox-7dff88458-rng7p
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7dff88458
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7dff88458
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9hpj2 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-9hpj2:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                  From               Message
	  ----     ------            ----                 ----               -------
	  Warning  FailedScheduling  7m19s (x4 over 22m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  49s (x3 over 6m6s)   default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.

-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (57.86s)
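For context on the FailedScheduling events in the describe output above: with required pod anti-affinity on the app=busybox label, at most one replica can land per node, so a replica stays Pending until another schedulable node joins, which matches the 0/1 and 0/2 "didn't match pod anti-affinity rules" progression shown. A minimal Go sketch of such a constraint, assuming the test's Deployment uses required anti-affinity keyed on the hostname topology (the actual manifest is not shown in this log):

	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	)

	func main() {
		// Required anti-affinity: no two pods labeled app=busybox may share a node.
		// N replicas therefore need N schedulable nodes before all can be placed.
		affinity := &corev1.Affinity{
			PodAntiAffinity: &corev1.PodAntiAffinity{
				RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
					LabelSelector: &metav1.LabelSelector{
						MatchLabels: map[string]string{"app": "busybox"},
					},
					TopologyKey: "kubernetes.io/hostname",
				}},
			},
		}
		fmt.Printf("%+v\n", affinity)
	}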

TestMultiControlPlane/serial/RestartSecondaryNode (100.05s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-132600 node start m02 -v=7 --alsologtostderr
E1014 07:48:53.927369     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\addons-043000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:422: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-132600 node start m02 -v=7 --alsologtostderr: exit status 1 (7.5423027s)

-- stdout --
	* Starting "ha-132600-m02" control-plane node in "ha-132600" cluster
	* Restarting existing hyperv VM for "ha-132600-m02" ...

-- /stdout --
** stderr ** 
	I1014 07:48:52.816934    3896 out.go:345] Setting OutFile to fd 1044 ...
	I1014 07:48:52.818807    3896 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:48:52.818807    3896 out.go:358] Setting ErrFile to fd 1196...
	I1014 07:48:52.819025    3896 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:48:52.834457    3896 mustload.go:65] Loading cluster: ha-132600
	I1014 07:48:52.835412    3896 config.go:182] Loaded profile config "ha-132600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:48:52.835812    3896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:48:54.891055    3896 main.go:141] libmachine: [stdout =====>] : Off
	
	I1014 07:48:54.891055    3896 main.go:141] libmachine: [stderr =====>] : 
	W1014 07:48:54.891055    3896 host.go:58] "ha-132600-m02" host status: Stopped
	I1014 07:48:54.895625    3896 out.go:177] * Starting "ha-132600-m02" control-plane node in "ha-132600" cluster
	I1014 07:48:54.897855    3896 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1014 07:48:54.898504    3896 preload.go:146] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I1014 07:48:54.898504    3896 cache.go:56] Caching tarball of preloaded images
	I1014 07:48:54.898504    3896 preload.go:172] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1014 07:48:54.899101    3896 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1014 07:48:54.899101    3896 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\config.json ...
	I1014 07:48:54.902133    3896 start.go:360] acquireMachinesLock for ha-132600-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 07:48:54.902133    3896 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-132600-m02"
	I1014 07:48:54.902133    3896 start.go:96] Skipping create...Using existing machine configuration
	I1014 07:48:54.902133    3896 fix.go:54] fixHost starting: m02
	I1014 07:48:54.902816    3896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:48:56.975053    3896 main.go:141] libmachine: [stdout =====>] : Off
	
	I1014 07:48:56.975053    3896 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:48:56.975350    3896 fix.go:112] recreateIfNeeded on ha-132600-m02: state=Stopped err=<nil>
	W1014 07:48:56.975350    3896 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 07:48:56.980615    3896 out.go:177] * Restarting existing hyperv VM for "ha-132600-m02" ...
	I1014 07:48:56.982994    3896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-132600-m02
	I1014 07:49:00.162531    3896 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:49:00.162591    3896 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:49:00.162591    3896 main.go:141] libmachine: Waiting for host to start...
	I1014 07:49:00.162591    3896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state

** /stderr **
ha_test.go:424: I1014 07:48:52.816934    3896 out.go:345] Setting OutFile to fd 1044 ...
I1014 07:48:52.818807    3896 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1014 07:48:52.818807    3896 out.go:358] Setting ErrFile to fd 1196...
I1014 07:48:52.819025    3896 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1014 07:48:52.834457    3896 mustload.go:65] Loading cluster: ha-132600
I1014 07:48:52.835412    3896 config.go:182] Loaded profile config "ha-132600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1014 07:48:52.835812    3896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
I1014 07:48:54.891055    3896 main.go:141] libmachine: [stdout =====>] : Off

I1014 07:48:54.891055    3896 main.go:141] libmachine: [stderr =====>] : 
W1014 07:48:54.891055    3896 host.go:58] "ha-132600-m02" host status: Stopped
I1014 07:48:54.895625    3896 out.go:177] * Starting "ha-132600-m02" control-plane node in "ha-132600" cluster
I1014 07:48:54.897855    3896 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I1014 07:48:54.898504    3896 preload.go:146] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
I1014 07:48:54.898504    3896 cache.go:56] Caching tarball of preloaded images
I1014 07:48:54.898504    3896 preload.go:172] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I1014 07:48:54.899101    3896 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
I1014 07:48:54.899101    3896 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\config.json ...
I1014 07:48:54.902133    3896 start.go:360] acquireMachinesLock for ha-132600-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1014 07:48:54.902133    3896 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-132600-m02"
I1014 07:48:54.902133    3896 start.go:96] Skipping create...Using existing machine configuration
I1014 07:48:54.902133    3896 fix.go:54] fixHost starting: m02
I1014 07:48:54.902816    3896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
I1014 07:48:56.975053    3896 main.go:141] libmachine: [stdout =====>] : Off

I1014 07:48:56.975053    3896 main.go:141] libmachine: [stderr =====>] : 
I1014 07:48:56.975350    3896 fix.go:112] recreateIfNeeded on ha-132600-m02: state=Stopped err=<nil>
W1014 07:48:56.975350    3896 fix.go:138] unexpected machine state, will restart: <nil>
I1014 07:48:56.980615    3896 out.go:177] * Restarting existing hyperv VM for "ha-132600-m02" ...
I1014 07:48:56.982994    3896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-132600-m02
I1014 07:49:00.162531    3896 main.go:141] libmachine: [stdout =====>] : 
I1014 07:49:00.162591    3896 main.go:141] libmachine: [stderr =====>] : 
I1014 07:49:00.162591    3896 main.go:141] libmachine: Waiting for host to start...
I1014 07:49:00.162591    3896 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state

ha_test.go:425: secondary control-plane node start returned an error. args "out/minikube-windows-amd64.exe -p ha-132600 node start m02 -v=7 --alsologtostderr": exit status 1
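Each "[executing ==>]" line above is the hyperv driver shelling out to PowerShell and parsing stdout; a VM state of "Off" is what routes the code into the restart path. A minimal sketch of that query pattern (the helper name and error handling here are illustrative, not minikube's actual code):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// vmState runs the same PowerShell expression seen in the log, e.g.
	// ( Hyper-V\Get-VM ha-132600-m02 ).state, and returns trimmed stdout
	// such as "Off" or "Running".
	func vmState(name string) (string, error) {
		cmd := exec.Command(
			`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
			"-NoProfile", "-NonInteractive",
			fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", name),
		)
		out, err := cmd.Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		state, err := vmState("ha-132600-m02")
		if err != nil {
			fmt.Println("query failed:", err)
			return
		}
		fmt.Println("state:", state) // "Off" triggers the restart path above
	}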
ha_test.go:430: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-132600 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-132600 status -v=7 --alsologtostderr: context deadline exceeded (0s)
I1014 07:49:00.244761     936 retry.go:31] will retry after 1.494641901s: context deadline exceeded
ha_test.go:430: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-132600 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-132600 status -v=7 --alsologtostderr: context deadline exceeded (0s)
I1014 07:49:01.740282     936 retry.go:31] will retry after 2.230725254s: context deadline exceeded
ha_test.go:430: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-132600 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-132600 status -v=7 --alsologtostderr: context deadline exceeded (0s)
I1014 07:49:03.973118     936 retry.go:31] will retry after 2.770858309s: context deadline exceeded
ha_test.go:430: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-132600 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-132600 status -v=7 --alsologtostderr: context deadline exceeded (0s)
I1014 07:49:06.745026     936 retry.go:31] will retry after 3.56754496s: context deadline exceeded
ha_test.go:430: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-132600 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-132600 status -v=7 --alsologtostderr: context deadline exceeded (0s)
I1014 07:49:10.313751     936 retry.go:31] will retry after 7.487087569s: context deadline exceeded
E1014 07:49:10.847136     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\addons-043000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:430: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-132600 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-132600 status -v=7 --alsologtostderr: context deadline exceeded (0s)
I1014 07:49:17.801781     936 retry.go:31] will retry after 6.877546041s: context deadline exceeded
ha_test.go:430: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-132600 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-132600 status -v=7 --alsologtostderr: context deadline exceeded (0s)
I1014 07:49:24.679616     936 retry.go:31] will retry after 16.136726889s: context deadline exceeded
ha_test.go:430: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-132600 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-132600 status -v=7 --alsologtostderr: context deadline exceeded (0s)
I1014 07:49:40.817062     936 retry.go:31] will retry after 19.052414533s: context deadline exceeded
ha_test.go:430: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-132600 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-132600 status -v=7 --alsologtostderr: context deadline exceeded (0s)
ha_test.go:434: failed to run minikube status. args "out/minikube-windows-amd64.exe -p ha-132600 status -v=7 --alsologtostderr" : context deadline exceeded
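The retry.go lines above show the status polls backing off with growing, jittered delays (roughly 1.5s, 2.2s, 2.8s, 3.6s, 7.5s, 6.9s, 16.1s, 19.1s) until the surrounding context's deadline is exhausted. A minimal sketch of that retry shape, assuming exponential backoff with jitter (the multiplier and attempt count are illustrative, not retry.go's actual tuning):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff retries op until it succeeds or attempts run out,
	// growing the delay by ~1.5x plus random jitter between tries, similar
	// to the "will retry after ..." progression in the log above.
	func retryWithBackoff(attempts int, initial time.Duration, op func() error) error {
		delay := initial
		for i := 0; i < attempts; i++ {
			err := op()
			if err == nil {
				return nil
			}
			if i == attempts-1 {
				return err
			}
			jitter := time.Duration(rand.Int63n(int64(delay) / 2))
			sleep := delay + jitter
			fmt.Printf("will retry after %s: %v\n", sleep, err)
			time.Sleep(sleep)
			delay = delay * 3 / 2
		}
		return errors.New("unreachable")
	}

	func main() {
		_ = retryWithBackoff(9, 1500*time.Millisecond, func() error {
			return errors.New("context deadline exceeded") // stand-in for the status call
		})
	}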
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-132600 -n ha-132600
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-132600 -n ha-132600: (11.8443434s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-132600 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-132600 logs -n 25: (8.1330325s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| kubectl | -p ha-132600 -- get pods -o          | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:36 PDT | 14 Oct 24 07:36 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- get pods -o          | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:36 PDT | 14 Oct 24 07:36 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- get pods -o          | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:36 PDT | 14 Oct 24 07:36 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- get pods -o          | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:36 PDT | 14 Oct 24 07:36 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- get pods -o          | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:37 PDT | 14 Oct 24 07:37 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- get pods -o          | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:37 PDT | 14 Oct 24 07:37 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- get pods -o          | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT | 14 Oct 24 07:38 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- get pods -o          | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT | 14 Oct 24 07:38 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT |                     |
	|         | busybox-7dff88458-8thz6 --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT | 14 Oct 24 07:38 PDT |
	|         | busybox-7dff88458-kr92j --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT |                     |
	|         | busybox-7dff88458-rng7p --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT |                     |
	|         | busybox-7dff88458-8thz6 --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT | 14 Oct 24 07:38 PDT |
	|         | busybox-7dff88458-kr92j --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT |                     |
	|         | busybox-7dff88458-rng7p --           |           |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT |                     |
	|         | busybox-7dff88458-8thz6 -- nslookup  |           |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT | 14 Oct 24 07:38 PDT |
	|         | busybox-7dff88458-kr92j -- nslookup  |           |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT |                     |
	|         | busybox-7dff88458-rng7p -- nslookup  |           |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- get pods -o          | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT | 14 Oct 24 07:38 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT |                     |
	|         | busybox-7dff88458-8thz6              |           |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT | 14 Oct 24 07:38 PDT |
	|         | busybox-7dff88458-kr92j              |           |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT |                     |
	|         | busybox-7dff88458-kr92j -- sh        |           |                   |         |                     |                     |
	|         | -c ping -c 1 172.20.96.1             |           |                   |         |                     |                     |
	| kubectl | -p ha-132600 -- exec                 | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:38 PDT |                     |
	|         | busybox-7dff88458-rng7p              |           |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |           |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |           |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |                   |         |                     |                     |
	| node    | add -p ha-132600 -v=7                | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:39 PDT | 14 Oct 24 07:42 PDT |
	|         | --alsologtostderr                    |           |                   |         |                     |                     |
	| node    | ha-132600 node stop m02 -v=7         | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:46 PDT | 14 Oct 24 07:46 PDT |
	|         | --alsologtostderr                    |           |                   |         |                     |                     |
	| node    | ha-132600 node start m02 -v=7        | ha-132600 | minikube1\jenkins | v1.34.0 | 14 Oct 24 07:48 PDT |                     |
	|         | --alsologtostderr                    |           |                   |         |                     |                     |
	|---------|--------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/14 07:19:00
	Running on machine: minikube1
	Binary: Built with gc go1.23.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 07:19:00.342865   13076 out.go:345] Setting OutFile to fd 1228 ...
	I1014 07:19:00.344546   13076 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:19:00.344546   13076 out.go:358] Setting ErrFile to fd 824...
	I1014 07:19:00.344546   13076 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:19:00.368247   13076 out.go:352] Setting JSON to false
	I1014 07:19:00.372277   13076 start.go:129] hostinfo: {"hostname":"minikube1","uptime":101054,"bootTime":1728814485,"procs":202,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5011 Build 19045.5011","kernelVersion":"10.0.19045.5011 Build 19045.5011","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W1014 07:19:00.372803   13076 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1014 07:19:00.379728   13076 out.go:177] * [ha-132600] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5011 Build 19045.5011
	I1014 07:19:00.386820   13076 notify.go:220] Checking for updates...
	I1014 07:19:00.389571   13076 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1014 07:19:00.392679   13076 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 07:19:00.395167   13076 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I1014 07:19:00.398285   13076 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 07:19:00.400520   13076 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 07:19:00.404118   13076 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 07:19:05.654471   13076 out.go:177] * Using the hyperv driver based on user configuration
	I1014 07:19:05.658285   13076 start.go:297] selected driver: hyperv
	I1014 07:19:05.658285   13076 start.go:901] validating driver "hyperv" against <nil>
	I1014 07:19:05.658285   13076 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 07:19:05.705211   13076 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1014 07:19:05.706541   13076 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 07:19:05.706541   13076 cni.go:84] Creating CNI manager for ""
	I1014 07:19:05.706541   13076 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1014 07:19:05.706541   13076 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1014 07:19:05.707066   13076 start.go:340] cluster config:
	{Name:ha-132600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-132600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 07:19:05.707207   13076 iso.go:125] acquiring lock: {Name:mkb71635bcbccba29dce9048ad4cc71430a7e577 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 07:19:05.711264   13076 out.go:177] * Starting "ha-132600" primary control-plane node in "ha-132600" cluster
	I1014 07:19:05.715097   13076 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1014 07:19:05.715262   13076 preload.go:146] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I1014 07:19:05.715344   13076 cache.go:56] Caching tarball of preloaded images
	I1014 07:19:05.715452   13076 preload.go:172] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1014 07:19:05.715452   13076 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1014 07:19:05.716553   13076 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\config.json ...
	I1014 07:19:05.716898   13076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\config.json: {Name:mkbd11bf3f90adebf1f4630d2e8deaac328748d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:19:05.717886   13076 start.go:360] acquireMachinesLock for ha-132600: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 07:19:05.717886   13076 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-132600"
	I1014 07:19:05.718562   13076 start.go:93] Provisioning new machine with config: &{Name:ha-132600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-132600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1014 07:19:05.718562   13076 start.go:125] createHost starting for "" (driver="hyperv")
	I1014 07:19:05.721891   13076 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1014 07:19:05.722555   13076 start.go:159] libmachine.API.Create for "ha-132600" (driver="hyperv")
	I1014 07:19:05.722555   13076 client.go:168] LocalClient.Create starting
	I1014 07:19:05.722555   13076 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I1014 07:19:05.723316   13076 main.go:141] libmachine: Decoding PEM data...
	I1014 07:19:05.723316   13076 main.go:141] libmachine: Parsing certificate...
	I1014 07:19:05.723316   13076 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I1014 07:19:05.723877   13076 main.go:141] libmachine: Decoding PEM data...
	I1014 07:19:05.723955   13076 main.go:141] libmachine: Parsing certificate...
	I1014 07:19:05.723955   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I1014 07:19:07.752927   13076 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I1014 07:19:07.753576   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:07.753793   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I1014 07:19:09.445838   13076 main.go:141] libmachine: [stdout =====>] : False
	
	I1014 07:19:09.445838   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:09.446315   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1014 07:19:10.901034   13076 main.go:141] libmachine: [stdout =====>] : True
	
	I1014 07:19:10.901034   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:10.901034   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1014 07:19:14.320078   13076 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1014 07:19:14.320309   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:14.323253   13076 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1014 07:19:14.829443   13076 main.go:141] libmachine: Creating SSH key...
	I1014 07:19:14.988345   13076 main.go:141] libmachine: Creating VM...
	I1014 07:19:14.988345   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1014 07:19:17.790932   13076 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1014 07:19:17.791005   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:17.791145   13076 main.go:141] libmachine: Using switch "Default Switch"
	I1014 07:19:17.791145   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1014 07:19:19.529861   13076 main.go:141] libmachine: [stdout =====>] : True
	
	I1014 07:19:19.529861   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:19.529861   13076 main.go:141] libmachine: Creating VHD
	I1014 07:19:19.530436   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\fixed.vhd' -SizeBytes 10MB -Fixed
	I1014 07:19:23.105904   13076 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 267BC11D-A195-4636-8AE9-EF1D7966C95C
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I1014 07:19:23.105987   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:23.106037   13076 main.go:141] libmachine: Writing magic tar header
	I1014 07:19:23.106037   13076 main.go:141] libmachine: Writing SSH key tar header
	I1014 07:19:23.118040   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\disk.vhd' -VHDType Dynamic -DeleteSource
	I1014 07:19:26.201609   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:26.201950   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:26.202023   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\disk.vhd' -SizeBytes 20000MB
	I1014 07:19:28.768368   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:28.768789   13076 main.go:141] libmachine: [stderr =====>] : 
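	The sequence just logged (New-VHD -Fixed at 10MB, the two tar-header writes, Convert-VHD to Dynamic, Resize-VHD to 20000MB) is the boot2docker disk bootstrap: a tar stream carrying the SSH key is written over the tiny fixed disk's payload for the guest to pick up on first boot, and the disk is then converted and grown. A minimal sketch of the tar-writing step, assuming the boot2docker convention of a tar archive at the start of the raw disk (file names and the in-archive key path are illustrative, not minikube's actual code):

	package main

	import (
		"archive/tar"
		"log"
		"os"
	)

	// writeSSHKeyTar overwrites the start of the fixed VHD's payload with a
	// tar stream containing the public key, mirroring the "Writing magic tar
	// header" / "Writing SSH key tar header" steps in the log above.
	func writeSSHKeyTar(vhdPath, pubKeyPath string) error {
		key, err := os.ReadFile(pubKeyPath)
		if err != nil {
			return err
		}
		f, err := os.OpenFile(vhdPath, os.O_WRONLY, 0) // write in place, keep size
		if err != nil {
			return err
		}
		defer f.Close()
		tw := tar.NewWriter(f)
		if err := tw.WriteHeader(&tar.Header{
			Name: ".ssh/authorized_keys", // path the guest expects; illustrative
			Mode: 0644,
			Size: int64(len(key)),
		}); err != nil {
			return err
		}
		if _, err := tw.Write(key); err != nil {
			return err
		}
		return tw.Close()
	}

	func main() {
		if err := writeSSHKeyTar("fixed.vhd", "id_rsa.pub"); err != nil {
			log.Fatal(err)
		}
	}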
	I1014 07:19:28.768969   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-132600 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I1014 07:19:32.226690   13076 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-132600 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I1014 07:19:32.226915   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:32.226915   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-132600 -DynamicMemoryEnabled $false
	I1014 07:19:34.390731   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:34.390837   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:34.391049   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-132600 -Count 2
	I1014 07:19:36.449382   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:36.449628   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:36.449628   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-132600 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\boot2docker.iso'
	I1014 07:19:38.932909   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:38.932909   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:38.932909   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-132600 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\disk.vhd'
	I1014 07:19:41.509229   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:41.509229   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:41.509229   13076 main.go:141] libmachine: Starting VM...
	I1014 07:19:41.509229   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-132600
	I1014 07:19:44.718680   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:44.718680   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:44.719475   13076 main.go:141] libmachine: Waiting for host to start...
	I1014 07:19:44.719770   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:19:46.954823   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:19:46.954823   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:46.955829   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:19:49.408227   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:49.408227   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:50.409369   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:19:52.561607   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:19:52.561912   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:52.561912   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:19:55.081136   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:19:55.081187   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:56.082401   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:19:58.271497   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:19:58.271497   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:19:58.272384   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:00.740863   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:20:00.741070   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:01.741536   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:03.902654   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:03.902721   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:03.902721   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:06.378064   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:20:06.378064   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:07.379104   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:09.537329   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:09.537329   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:09.537541   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:12.074320   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:12.074320   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:12.074834   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:14.171613   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:14.171613   13076 main.go:141] libmachine: [stderr =====>] : 
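	The loop just logged alternates a VM state check with a read of networkadapters[0].ipaddresses[0], pausing about a second between rounds: Hyper-V reports Running well before the guest has an address, so the driver polls until stdout is non-empty. A minimal sketch of that wait, with psOutput standing in for the PowerShell invocations shown above (helper names and the timeout are illustrative):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// psOutput runs a PowerShell expression and returns trimmed stdout.
	func psOutput(expr string) (string, error) {
		out, err := exec.Command(
			`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
			"-NoProfile", "-NonInteractive", expr,
		).Output()
		return strings.TrimSpace(string(out)), err
	}

	// waitForIP polls the VM's first adapter until Hyper-V reports an address,
	// matching the repeated Get-VM / ipaddresses[0] calls in the log above.
	func waitForIP(vm string, timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			ip, err := psOutput(fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vm))
			if err == nil && ip != "" {
				return ip, nil
			}
			time.Sleep(time.Second) // the log shows ~1s between polls
		}
		return "", fmt.Errorf("timed out waiting for %s to get an IP", vm)
	}

	func main() {
		ip, err := waitForIP("ha-132600", 5*time.Minute)
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("host IP:", ip) // e.g. 172.20.108.120 in the log above
	}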
	I1014 07:20:14.171613   13076 machine.go:93] provisionDockerMachine start ...
	I1014 07:20:14.172377   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:16.225265   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:16.225265   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:16.225634   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:18.692061   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:18.692061   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:18.698261   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:20:18.713495   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.108.120 22 <nil> <nil>}
	I1014 07:20:18.713568   13076 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 07:20:18.839311   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1014 07:20:18.839517   13076 buildroot.go:166] provisioning hostname "ha-132600"
	I1014 07:20:18.839517   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:20.911254   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:20.911424   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:20.911601   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:23.370138   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:23.370756   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:23.377008   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:20:23.377773   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.108.120 22 <nil> <nil>}
	I1014 07:20:23.377773   13076 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-132600 && echo "ha-132600" | sudo tee /etc/hostname
	I1014 07:20:23.537548   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-132600
	
	I1014 07:20:23.537548   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:25.571429   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:25.571429   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:25.572326   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:28.068677   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:28.068677   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:28.077026   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:20:28.078615   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.108.120 22 <nil> <nil>}
	I1014 07:20:28.078615   13076 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-132600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-132600/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-132600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 07:20:28.216809   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 07:20:28.216874   13076 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I1014 07:20:28.216990   13076 buildroot.go:174] setting up certificates
	I1014 07:20:28.217016   13076 provision.go:84] configureAuth start
	I1014 07:20:28.217047   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:30.246758   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:30.247093   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:30.247368   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:32.688428   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:32.688428   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:32.688428   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:34.748908   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:34.748908   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:34.749349   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:37.221628   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:37.221798   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:37.221798   13076 provision.go:143] copyHostCerts
	I1014 07:20:37.221998   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I1014 07:20:37.222444   13076 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I1014 07:20:37.222558   13076 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I1014 07:20:37.223110   13076 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1014 07:20:37.224109   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I1014 07:20:37.224599   13076 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I1014 07:20:37.224599   13076 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I1014 07:20:37.224959   13076 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I1014 07:20:37.225332   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I1014 07:20:37.225332   13076 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I1014 07:20:37.225332   13076 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I1014 07:20:37.226765   13076 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1014 07:20:37.227733   13076 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-132600 san=[127.0.0.1 172.20.108.120 ha-132600 localhost minikube]
	I1014 07:20:37.731674   13076 provision.go:177] copyRemoteCerts
	I1014 07:20:37.744706   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 07:20:37.744706   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:39.801309   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:39.801309   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:39.801309   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:42.273306   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:42.273367   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:42.273367   13076 sshutil.go:53] new ssh client: &{IP:172.20.108.120 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\id_rsa Username:docker}
	I1014 07:20:42.378794   13076 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.6340227s)
	I1014 07:20:42.378847   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1014 07:20:42.378847   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1200 bytes)
	I1014 07:20:42.424772   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1014 07:20:42.425289   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 07:20:42.471397   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1014 07:20:42.471964   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1014 07:20:42.527104   13076 provision.go:87] duration metric: took 14.3100395s to configureAuth
	I1014 07:20:42.527104   13076 buildroot.go:189] setting minikube options for container-runtime
	I1014 07:20:42.527712   13076 config.go:182] Loaded profile config "ha-132600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:20:42.528252   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:44.598308   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:44.598308   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:44.599305   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:47.040424   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:47.040637   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:47.045726   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:20:47.046398   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.108.120 22 <nil> <nil>}
	I1014 07:20:47.046398   13076 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1014 07:20:47.167710   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1014 07:20:47.167801   13076 buildroot.go:70] root file system type: tmpfs
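
The probe above asks the guest for the root filesystem type; `tmpfs` confirms a buildroot live image whose rootfs does not persist, which is why the docker unit is written out fresh below. The same probe as a small Go sketch, assuming `sh`, `df` and `tail` are available:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // rootFSType runs the same probe the log shows being run over SSH:
    // ask df for the filesystem type of / and keep only the last line.
    func rootFSType() (string, error) {
        out, err := exec.Command("sh", "-c", "df --output=fstype / | tail -n 1").Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        fs, err := rootFSType()
        if err != nil {
            panic(err)
        }
        fmt.Println("root filesystem type:", fs) // "tmpfs" inside the buildroot guest
    }
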
	I1014 07:20:47.168208   13076 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1014 07:20:47.168315   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:49.202111   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:49.202331   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:49.202424   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:51.648278   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:51.648278   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:51.654248   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:20:51.655052   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.108.120 22 <nil> <nil>}
	I1014 07:20:51.655052   13076 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1014 07:20:51.814048   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1014 07:20:51.814229   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:20:53.871643   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:20:53.871643   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:53.872030   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:20:56.286054   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:20:56.286054   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:20:56.292357   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:20:56.293125   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.108.120 22 <nil> <nil>}
	I1014 07:20:56.293125   13076 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1014 07:20:58.458028   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
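
The command above only installs docker.service.new and restarts docker when `diff -u` reports a difference; here the target did not exist yet, so diff failed and the unit was installed and enabled. A hedged Go sketch of that write-only-if-changed idiom (the file path is illustrative, not minikube's code):

    package main

    import (
        "bytes"
        "os"
    )

    // writeIfChanged mirrors the shell idiom from the log: only replace
    // the target (and report that a restart is needed) when the new
    // content differs from what is already on disk.
    func writeIfChanged(path string, newContent []byte) (changed bool, err error) {
        old, err := os.ReadFile(path)
        if err == nil && bytes.Equal(old, newContent) {
            return false, nil // identical: leave the unit alone, skip daemon-reload
        }
        // Missing file or different content: install the new version.
        if err := os.WriteFile(path, newContent, 0o644); err != nil {
            return false, err
        }
        return true, nil
    }

    func main() {
        changed, err := writeIfChanged("/tmp/docker.service", []byte("[Unit]\n"))
        if err != nil {
            panic(err)
        }
        if changed {
            // the real flow would now run: systemctl daemon-reload && systemctl restart docker
        }
    }
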
	
	I1014 07:20:58.458028   13076 machine.go:96] duration metric: took 44.286361s to provisionDockerMachine
	I1014 07:20:58.458028   13076 client.go:171] duration metric: took 1m52.7353371s to LocalClient.Create
	I1014 07:20:58.458028   13076 start.go:167] duration metric: took 1m52.7353371s to libmachine.API.Create "ha-132600"
	I1014 07:20:58.458028   13076 start.go:293] postStartSetup for "ha-132600" (driver="hyperv")
	I1014 07:20:58.458028   13076 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 07:20:58.471262   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 07:20:58.471262   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:21:00.512590   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:21:00.513162   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:00.513321   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:21:02.924795   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:21:02.924929   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:02.925163   13076 sshutil.go:53] new ssh client: &{IP:172.20.108.120 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\id_rsa Username:docker}
	I1014 07:21:03.032582   13076 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.5613141s)
	I1014 07:21:03.044373   13076 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 07:21:03.050831   13076 info.go:137] Remote host: Buildroot 2023.02.9
	I1014 07:21:03.050831   13076 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I1014 07:21:03.051271   13076 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I1014 07:21:03.052198   13076 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem -> 9362.pem in /etc/ssl/certs
	I1014 07:21:03.052295   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem -> /etc/ssl/certs/9362.pem
	I1014 07:21:03.063873   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 07:21:03.081968   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem --> /etc/ssl/certs/9362.pem (1708 bytes)
	I1014 07:21:03.128637   13076 start.go:296] duration metric: took 4.6706031s for postStartSetup
	I1014 07:21:03.132031   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:21:05.196102   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:21:05.196622   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:05.196696   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:21:07.624940   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:21:07.625781   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:07.626103   13076 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\config.json ...
	I1014 07:21:07.629046   13076 start.go:128] duration metric: took 2m1.9103356s to createHost
	I1014 07:21:07.629046   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:21:09.687162   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:21:09.687420   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:09.687490   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:21:12.104556   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:21:12.104556   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:12.110165   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:21:12.110572   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.108.120 22 <nil> <nil>}
	I1014 07:21:12.110572   13076 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1014 07:21:12.241524   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728915672.238854784
	
	I1014 07:21:12.241655   13076 fix.go:216] guest clock: 1728915672.238854784
	I1014 07:21:12.241726   13076 fix.go:229] Guest: 2024-10-14 07:21:12.238854784 -0700 PDT Remote: 2024-10-14 07:21:07.6290467 -0700 PDT m=+127.381500801 (delta=4.609808084s)
	I1014 07:21:12.241841   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:21:14.299338   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:21:14.299599   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:14.299599   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:21:16.767477   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:21:16.767665   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:16.775565   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:21:16.775565   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.108.120 22 <nil> <nil>}
	I1014 07:21:16.775565   13076 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1728915672
	I1014 07:21:16.921038   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Oct 14 14:21:12 UTC 2024
	
	I1014 07:21:16.921096   13076 fix.go:236] clock set: Mon Oct 14 14:21:12 UTC 2024
	 (err=<nil>)
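
fix.go derives the drift by subtracting the host-side timestamp from the guest's `date +%s.%N` reading, then resets the guest clock with `date -s`. Reproducing the 4.609808084s delta from the two values captured in the log, as a small Go sketch:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    func main() {
        // Output of `date +%s.%N` on the guest, as captured in the log.
        guestRaw := "1728915672.238854784"

        parts := strings.SplitN(guestRaw, ".", 2)
        sec, _ := strconv.ParseInt(parts[0], 10, 64)
        nsec, _ := strconv.ParseInt(parts[1], 10, 64)
        guest := time.Unix(sec, nsec)

        // Host-side reference time from the same log line.
        remote, _ := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST",
            "2024-10-14 07:21:07.6290467 -0700 PDT")

        delta := guest.Sub(remote)
        fmt.Println(delta) // 4.609808084s, matching the log
    }
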
	I1014 07:21:16.921169   13076 start.go:83] releasing machines lock for "ha-132600", held for 2m11.2030498s
	I1014 07:21:16.921319   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:21:18.988904   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:21:18.989898   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:18.989947   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:21:21.410167   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:21:21.410403   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:21.414482   13076 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1014 07:21:21.414683   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:21:21.423844   13076 ssh_runner.go:195] Run: cat /version.json
	I1014 07:21:21.423844   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:21:23.554001   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:21:23.554206   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:23.554362   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:21:23.565882   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:21:23.565882   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:23.565882   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:21:26.092249   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:21:26.092249   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:26.092379   13076 sshutil.go:53] new ssh client: &{IP:172.20.108.120 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\id_rsa Username:docker}
	I1014 07:21:26.119019   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:21:26.119086   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:21:26.119215   13076 sshutil.go:53] new ssh client: &{IP:172.20.108.120 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\id_rsa Username:docker}
	I1014 07:21:26.179573   13076 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.764971s)
	W1014 07:21:26.179670   13076 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
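
This status-127 failure is the root cause of the stderr warning emitted just below (and of the "Failing to connect to https://registry.k8s.io/" spam the test flags): the connectivity probe is executed inside the Linux guest as `curl.exe`, a binary name that only exists on Windows hosts, so bash cannot find it and the registry check is reported as failed regardless of actual network state. An illustrative Go sketch of keying the binary name on the platform the command will run on; the helper is hypothetical, not minikube's actual code:

    package main

    import (
        "fmt"
        "runtime"
    )

    // curlBinary returns the curl executable name for the platform the
    // command will actually run on. The log shows the probe running
    // inside the Linux guest, so the host's runtime.GOOS ("windows"
    // here) is the wrong thing to key on -- which is the bug on display.
    func curlBinary(targetOS string) string {
        if targetOS == "windows" {
            return "curl.exe"
        }
        return "curl"
    }

    func main() {
        fmt.Println(curlBinary(runtime.GOOS)) // "curl.exe" on the Windows host
        fmt.Println(curlBinary("linux"))      // "curl" -- what the guest needed
    }
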
	I1014 07:21:26.212879   13076 ssh_runner.go:235] Completed: cat /version.json: (4.7890281s)
	I1014 07:21:26.223815   13076 ssh_runner.go:195] Run: systemctl --version
	I1014 07:21:26.243251   13076 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 07:21:26.251321   13076 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 07:21:26.262431   13076 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 07:21:26.290027   13076 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1014 07:21:26.290027   13076 start.go:495] detecting cgroup driver to use...
	I1014 07:21:26.290027   13076 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W1014 07:21:26.296743   13076 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W1014 07:21:26.296743   13076 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1014 07:21:26.340326   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1014 07:21:26.370362   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1014 07:21:26.391878   13076 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1014 07:21:26.402847   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1014 07:21:26.433241   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1014 07:21:26.462619   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1014 07:21:26.491735   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1014 07:21:26.520404   13076 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 07:21:26.550615   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1014 07:21:26.580278   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1014 07:21:26.608564   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1014 07:21:26.636849   13076 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 07:21:26.656448   13076 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1014 07:21:26.667504   13076 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1014 07:21:26.700184   13076 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
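
When the sysctl probe fails because /proc/sys/net/bridge is absent, the code treats that as recoverable ("might be okay"), loads br_netfilter, and enables IPv4 forwarding for the CNI bridge. The same probe-and-fallback sequence as a Go sketch (assumes sudo, sysctl and modprobe on PATH):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func run(name string, args ...string) error {
        return exec.Command(name, args...).Run()
    }

    func main() {
        // Probe: does the bridge netfilter sysctl exist yet?
        if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
            // Non-fatal, per the log: the module just isn't loaded yet.
            fmt.Println("sysctl probe failed, loading br_netfilter:", err)
            if err := run("sudo", "modprobe", "br_netfilter"); err != nil {
                panic(err)
            }
        }
        // Always make sure IPv4 forwarding is on for the CNI bridge.
        if err := run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward"); err != nil {
            panic(err)
        }
    }
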
	I1014 07:21:26.726834   13076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:21:26.916163   13076 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1014 07:21:26.953200   13076 start.go:495] detecting cgroup driver to use...
	I1014 07:21:26.967627   13076 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1014 07:21:27.005080   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 07:21:27.039193   13076 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 07:21:27.083701   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 07:21:27.118670   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1014 07:21:27.153508   13076 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1014 07:21:27.209689   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1014 07:21:27.230765   13076 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 07:21:27.273213   13076 ssh_runner.go:195] Run: which cri-dockerd
	I1014 07:21:27.291783   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1014 07:21:27.308581   13076 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1014 07:21:27.351709   13076 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1014 07:21:27.539571   13076 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1014 07:21:27.725567   13076 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1014 07:21:27.725567   13076 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1014 07:21:27.768723   13076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:21:27.951142   13076 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1014 07:21:30.486027   13076 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5347969s)
	I1014 07:21:30.497722   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1014 07:21:30.531680   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1014 07:21:30.563742   13076 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1014 07:21:30.767218   13076 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1014 07:21:30.967093   13076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:21:31.191192   13076 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1014 07:21:31.229757   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1014 07:21:31.267426   13076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:21:31.462408   13076 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1014 07:21:31.569509   13076 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1014 07:21:31.581151   13076 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1014 07:21:31.591322   13076 start.go:563] Will wait 60s for crictl version
	I1014 07:21:31.604845   13076 ssh_runner.go:195] Run: which crictl
	I1014 07:21:31.622874   13076 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1014 07:21:31.680621   13076 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I1014 07:21:31.689423   13076 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1014 07:21:31.732594   13076 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1014 07:21:31.766375   13076 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	I1014 07:21:31.766375   13076 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I1014 07:21:31.770835   13076 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I1014 07:21:31.770835   13076 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I1014 07:21:31.770835   13076 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I1014 07:21:31.770835   13076 ip.go:211] Found interface: {Index:13 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:08:65:4d Flags:up|broadcast|multicast|running}
	I1014 07:21:31.773955   13076 ip.go:214] interface addr: fe80::b548:ff79:d464:75b8/64
	I1014 07:21:31.773955   13076 ip.go:214] interface addr: 172.20.96.1/20
	I1014 07:21:31.784171   13076 ssh_runner.go:195] Run: grep 172.20.96.1	host.minikube.internal$ /etc/hosts
	I1014 07:21:31.791192   13076 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.20.96.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
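
The one-liner above keeps /etc/hosts idempotent: it filters out any existing host.minikube.internal line before appending the current gateway IP, so repeated starts never accumulate duplicates. A Go sketch of the same upsert, written against a scratch path rather than the real /etc/hosts:

    package main

    import (
        "os"
        "strings"
    )

    // upsertHost rewrites a hosts file so that exactly one line maps
    // hostname to ip, mirroring the grep -v / echo / cp pipeline above.
    // (Blank lines are dropped too, a simplification of the shell version.)
    func upsertHost(path, ip, hostname string) error {
        data, err := os.ReadFile(path)
        if err != nil && !os.IsNotExist(err) {
            return err
        }
        var kept []string
        for _, line := range strings.Split(string(data), "\n") {
            if line == "" || strings.HasSuffix(line, "\t"+hostname) {
                continue // drop any stale entry for this hostname
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+hostname)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        if err := upsertHost("/tmp/hosts", "172.20.96.1", "host.minikube.internal"); err != nil {
            panic(err)
        }
    }
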
	I1014 07:21:31.825649   13076 kubeadm.go:883] updating cluster {Name:ha-132600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-132600 Namespace:default APIServerHAVIP:172.20.111.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.108.120 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 07:21:31.825649   13076 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1014 07:21:31.834667   13076 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1014 07:21:31.867707   13076 docker.go:689] Got preloaded images: 
	I1014 07:21:31.867707   13076 docker.go:695] registry.k8s.io/kube-apiserver:v1.31.1 wasn't preloaded
	I1014 07:21:31.880238   13076 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1014 07:21:31.908893   13076 ssh_runner.go:195] Run: which lz4
	I1014 07:21:31.914374   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1014 07:21:31.924715   13076 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1014 07:21:31.930846   13076 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1014 07:21:31.931075   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (342028912 bytes)
	I1014 07:21:33.829320   13076 docker.go:653] duration metric: took 1.9148657s to copy over tarball
	I1014 07:21:33.840382   13076 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1014 07:21:42.539043   13076 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.6986506s)
	I1014 07:21:42.539174   13076 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1014 07:21:42.611065   13076 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1014 07:21:42.636639   13076 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I1014 07:21:42.681041   13076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:21:42.879066   13076 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1014 07:21:46.161713   13076 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.2824935s)
	I1014 07:21:46.173131   13076 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1014 07:21:46.198645   13076 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1014 07:21:46.198645   13076 cache_images.go:84] Images are preloaded, skipping loading
	I1014 07:21:46.198645   13076 kubeadm.go:934] updating node { 172.20.108.120 8443 v1.31.1 docker true true} ...
	I1014 07:21:46.198645   13076 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-132600 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.20.108.120
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-132600 Namespace:default APIServerHAVIP:172.20.111.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 07:21:46.209892   13076 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1014 07:21:46.283733   13076 cni.go:84] Creating CNI manager for ""
	I1014 07:21:46.283861   13076 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1014 07:21:46.283932   13076 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1014 07:21:46.284000   13076 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.20.108.120 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-132600 NodeName:ha-132600 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.20.108.120"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.20.108.120 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 07:21:46.284290   13076 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.20.108.120
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-132600"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "172.20.108.120"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.20.108.120"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1014 07:21:46.284367   13076 kube-vip.go:115] generating kube-vip config ...
	I1014 07:21:46.295693   13076 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1014 07:21:46.324017   13076 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1014 07:21:46.324230   13076 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.20.111.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I1014 07:21:46.334184   13076 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1014 07:21:46.349321   13076 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 07:21:46.360771   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1014 07:21:46.379224   13076 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (310 bytes)
	I1014 07:21:46.412080   13076 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 07:21:46.441691   13076 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I1014 07:21:46.479025   13076 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1014 07:21:46.519461   13076 ssh_runner.go:195] Run: grep 172.20.111.254	control-plane.minikube.internal$ /etc/hosts
	I1014 07:21:46.525021   13076 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.20.111.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 07:21:46.554943   13076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:21:46.730466   13076 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 07:21:46.761153   13076 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600 for IP: 172.20.108.120
	I1014 07:21:46.761153   13076 certs.go:194] generating shared ca certs ...
	I1014 07:21:46.761153   13076 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:21:46.761825   13076 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I1014 07:21:46.762680   13076 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I1014 07:21:46.762846   13076 certs.go:256] generating profile certs ...
	I1014 07:21:46.763605   13076 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\client.key
	I1014 07:21:46.763605   13076 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\client.crt with IP's: []
	I1014 07:21:46.955865   13076 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\client.crt ...
	I1014 07:21:46.955865   13076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\client.crt: {Name:mk4a5e51a4b9d62279c7ef4ef47716bcdf88d704 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:21:46.957857   13076 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\client.key ...
	I1014 07:21:46.957857   13076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\client.key: {Name:mkc80e99cfe4d8d17198a87fb188cfa1a72d727d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:21:46.958881   13076 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.key.59094d61
	I1014 07:21:46.958881   13076 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.crt.59094d61 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.20.108.120 172.20.111.254]
	I1014 07:21:47.375660   13076 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.crt.59094d61 ...
	I1014 07:21:47.375660   13076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.crt.59094d61: {Name:mke858ab0ac103663123708f9814cfdffe03211e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:21:47.377216   13076 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.key.59094d61 ...
	I1014 07:21:47.377216   13076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.key.59094d61: {Name:mk972fca892a193d6b4e84d42e27ed86ce9f0457 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:21:47.377811   13076 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.crt.59094d61 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.crt
	I1014 07:21:47.391806   13076 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.key.59094d61 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.key
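
The SAN list on the apiserver certificate above starts with 10.96.0.1, the first usable address of the 10.96.0.0/12 service CIDR (the in-cluster "kubernetes" service IP), alongside localhost, the node IP 172.20.108.120 and the HA VIP 172.20.111.254. Deriving that first service IP from the CIDR, as a Go sketch that ignores octet carry for simplicity:

    package main

    import (
        "fmt"
        "net"
    )

    // firstServiceIP returns the first usable host address of a service
    // CIDR -- the address the in-cluster "kubernetes" service gets, and
    // one of the SANs on the apiserver certificate above.
    func firstServiceIP(cidr string) (net.IP, error) {
        _, ipnet, err := net.ParseCIDR(cidr)
        if err != nil {
            return nil, err
        }
        ip := ipnet.IP.To4()
        next := make(net.IP, len(ip))
        copy(next, ip)
        next[len(next)-1]++ // network address + 1 (no carry handling here)
        return next, nil
    }

    func main() {
        ip, err := firstServiceIP("10.96.0.0/12")
        if err != nil {
            panic(err)
        }
        fmt.Println(ip) // 10.96.0.1
    }
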
	I1014 07:21:47.392823   13076 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.key
	I1014 07:21:47.392823   13076 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.crt with IP's: []
	I1014 07:21:47.675751   13076 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.crt ...
	I1014 07:21:47.675751   13076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.crt: {Name:mk5a9f1239213dd0a0f36b4ee41d5ac56cbd79c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:21:47.677918   13076 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.key ...
	I1014 07:21:47.677918   13076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.key: {Name:mkb931094180fef1516b0d43acb6430ecbcbb3fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:21:47.678894   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I1014 07:21:47.678894   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I1014 07:21:47.678894   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1014 07:21:47.678894   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1014 07:21:47.679943   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1014 07:21:47.679943   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1014 07:21:47.679943   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1014 07:21:47.691415   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1014 07:21:47.693468   13076 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\936.pem (1338 bytes)
	W1014 07:21:47.693468   13076 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\936_empty.pem, impossibly tiny 0 bytes
	I1014 07:21:47.694004   13076 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1014 07:21:47.694400   13076 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I1014 07:21:47.694671   13076 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1014 07:21:47.695091   13076 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I1014 07:21:47.695960   13076 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem (1708 bytes)
	I1014 07:21:47.696145   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\936.pem -> /usr/share/ca-certificates/936.pem
	I1014 07:21:47.696401   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem -> /usr/share/ca-certificates/9362.pem
	I1014 07:21:47.696580   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1014 07:21:47.697580   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 07:21:47.742705   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1014 07:21:47.790988   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 07:21:47.837975   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 07:21:47.884872   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1014 07:21:47.936878   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1014 07:21:47.985330   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 07:21:48.029181   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 07:21:48.078824   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\936.pem --> /usr/share/ca-certificates/936.pem (1338 bytes)
	I1014 07:21:48.127907   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem --> /usr/share/ca-certificates/9362.pem (1708 bytes)
	I1014 07:21:48.174542   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 07:21:48.218635   13076 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 07:21:48.264009   13076 ssh_runner.go:195] Run: openssl version
	I1014 07:21:48.282939   13076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 07:21:48.310256   13076 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 07:21:48.318050   13076 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 13:44 /usr/share/ca-certificates/minikubeCA.pem
	I1014 07:21:48.328768   13076 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 07:21:48.348090   13076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 07:21:48.376484   13076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/936.pem && ln -fs /usr/share/ca-certificates/936.pem /etc/ssl/certs/936.pem"
	I1014 07:21:48.408494   13076 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/936.pem
	I1014 07:21:48.416513   13076 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 14:00 /usr/share/ca-certificates/936.pem
	I1014 07:21:48.427815   13076 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/936.pem
	I1014 07:21:48.447033   13076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/936.pem /etc/ssl/certs/51391683.0"
	I1014 07:21:48.474293   13076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9362.pem && ln -fs /usr/share/ca-certificates/9362.pem /etc/ssl/certs/9362.pem"
	I1014 07:21:48.502293   13076 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9362.pem
	I1014 07:21:48.509688   13076 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 14:00 /usr/share/ca-certificates/9362.pem
	I1014 07:21:48.519811   13076 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9362.pem
	I1014 07:21:48.540403   13076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9362.pem /etc/ssl/certs/3ec20f2e.0"
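
Each of the three link commands above points /etc/ssl/certs/<subject-hash>.0 (b5213941, 51391683, 3ec20f2e in this run) at the matching PEM file, using the hash printed by `openssl x509 -hash -noout`; this is the standard lookup scheme OpenSSL-based TLS stacks use to find trusted CAs. A Go sketch of that install step, assuming openssl on PATH and a writable certs directory:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // installCA reproduces the two shell steps from the log: compute the
    // OpenSSL subject hash of a PEM certificate, then symlink
    // <certsDir>/<hash>.0 to it so TLS libraries can find it.
    func installCA(pemPath, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := fmt.Sprintf("%s/%s.0", certsDir, hash)
        _ = os.Remove(link) // ln -fs semantics: force-replace an existing link
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := installCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
            panic(err)
        }
    }
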
	I1014 07:21:48.569007   13076 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 07:21:48.575758   13076 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1014 07:21:48.576118   13076 kubeadm.go:392] StartCluster: {Name:ha-132600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-132600 Namespace:default APIServerHAVIP:172.20.111.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.108.120 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 07:21:48.584978   13076 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1014 07:21:48.619430   13076 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 07:21:48.647606   13076 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 07:21:48.676291   13076 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 07:21:48.693679   13076 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 07:21:48.693679   13076 kubeadm.go:157] found existing configuration files:
	
	I1014 07:21:48.703613   13076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 07:21:48.722067   13076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 07:21:48.732618   13076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 07:21:48.759836   13076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 07:21:48.777622   13076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 07:21:48.789748   13076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 07:21:48.818512   13076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 07:21:48.837396   13076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 07:21:48.848696   13076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 07:21:48.876926   13076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 07:21:48.893530   13076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 07:21:48.904372   13076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
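
The four grep-then-rm pairs above are one pattern applied per kubeconfig file: if the file does not mention the expected control-plane endpoint (here grep exits with status 2 because the files do not exist at all), it is removed so kubeadm can regenerate it. Condensed into a loop, with the endpoint and file list taken from the log:

    endpoint="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
        # A missing or mismatched file makes grep fail, triggering removal
        sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done
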
	I1014 07:21:48.920210   13076 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1014 07:21:49.392185   13076 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 07:22:04.140256   13076 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1014 07:22:04.140412   13076 kubeadm.go:310] [preflight] Running pre-flight checks
	I1014 07:22:04.140501   13076 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 07:22:04.140848   13076 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 07:22:04.141206   13076 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 07:22:04.141311   13076 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 07:22:04.144180   13076 out.go:235]   - Generating certificates and keys ...
	I1014 07:22:04.144373   13076 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1014 07:22:04.144575   13076 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1014 07:22:04.144939   13076 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1014 07:22:04.145172   13076 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1014 07:22:04.145315   13076 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1014 07:22:04.145521   13076 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1014 07:22:04.145845   13076 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1014 07:22:04.146212   13076 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-132600 localhost] and IPs [172.20.108.120 127.0.0.1 ::1]
	I1014 07:22:04.146365   13076 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1014 07:22:04.146614   13076 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-132600 localhost] and IPs [172.20.108.120 127.0.0.1 ::1]
	I1014 07:22:04.147003   13076 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1014 07:22:04.147200   13076 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1014 07:22:04.147236   13076 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1014 07:22:04.147236   13076 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 07:22:04.147236   13076 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 07:22:04.147236   13076 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 07:22:04.147954   13076 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 07:22:04.147987   13076 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 07:22:04.147987   13076 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 07:22:04.147987   13076 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 07:22:04.148785   13076 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 07:22:04.151235   13076 out.go:235]   - Booting up control plane ...
	I1014 07:22:04.151235   13076 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 07:22:04.151819   13076 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 07:22:04.152046   13076 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 07:22:04.152333   13076 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 07:22:04.152333   13076 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 07:22:04.152333   13076 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1014 07:22:04.152333   13076 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 07:22:04.152333   13076 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 07:22:04.153426   13076 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 523.263843ms
	I1014 07:22:04.153534   13076 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1014 07:22:04.153534   13076 kubeadm.go:310] [api-check] The API server is healthy after 9.176461865s
	I1014 07:22:04.153534   13076 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1014 07:22:04.154243   13076 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1014 07:22:04.154243   13076 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1014 07:22:04.154779   13076 kubeadm.go:310] [mark-control-plane] Marking the node ha-132600 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1014 07:22:04.154779   13076 kubeadm.go:310] [bootstrap-token] Using token: rm3d8a.inqsemtt5b5l5x2e
	I1014 07:22:04.157926   13076 out.go:235]   - Configuring RBAC rules ...
	I1014 07:22:04.158760   13076 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1014 07:22:04.158928   13076 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1014 07:22:04.158928   13076 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1014 07:22:04.159656   13076 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1014 07:22:04.159851   13076 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1014 07:22:04.160089   13076 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1014 07:22:04.160270   13076 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1014 07:22:04.160529   13076 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1014 07:22:04.160529   13076 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1014 07:22:04.160529   13076 kubeadm.go:310] 
	I1014 07:22:04.160529   13076 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1014 07:22:04.160529   13076 kubeadm.go:310] 
	I1014 07:22:04.160529   13076 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1014 07:22:04.160529   13076 kubeadm.go:310] 
	I1014 07:22:04.161100   13076 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1014 07:22:04.161186   13076 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1014 07:22:04.161352   13076 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1014 07:22:04.161352   13076 kubeadm.go:310] 
	I1014 07:22:04.161534   13076 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1014 07:22:04.161534   13076 kubeadm.go:310] 
	I1014 07:22:04.161706   13076 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1014 07:22:04.161706   13076 kubeadm.go:310] 
	I1014 07:22:04.161950   13076 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1014 07:22:04.161950   13076 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1014 07:22:04.161950   13076 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1014 07:22:04.161950   13076 kubeadm.go:310] 
	I1014 07:22:04.162515   13076 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1014 07:22:04.162607   13076 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1014 07:22:04.162607   13076 kubeadm.go:310] 
	I1014 07:22:04.162607   13076 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token rm3d8a.inqsemtt5b5l5x2e \
	I1014 07:22:04.162607   13076 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f876f951efbe7f1ed47ca578ee0d33e6ce8fe69c1a008a8154ee00f56ac84e46 \
	I1014 07:22:04.163162   13076 kubeadm.go:310] 	--control-plane 
	I1014 07:22:04.163261   13076 kubeadm.go:310] 
	I1014 07:22:04.163340   13076 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1014 07:22:04.163340   13076 kubeadm.go:310] 
	I1014 07:22:04.163570   13076 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token rm3d8a.inqsemtt5b5l5x2e \
	I1014 07:22:04.163570   13076 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f876f951efbe7f1ed47ca578ee0d33e6ce8fe69c1a008a8154ee00f56ac84e46 
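
The join commands above carry the bootstrap token rm3d8a.inqsemtt5b5l5x2e, which kubeadm issues with a 24-hour TTL by default. If another node were added after the token lapses, a fresh join command could be minted on the control plane; this is a hedged aside, not a step from this log:

    # Create a new bootstrap token and print a ready-to-use worker join command
    sudo kubeadm token create --print-join-command
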
	I1014 07:22:04.163570   13076 cni.go:84] Creating CNI manager for ""
	I1014 07:22:04.163570   13076 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1014 07:22:04.167124   13076 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1014 07:22:04.179134   13076 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1014 07:22:04.187701   13076 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1014 07:22:04.187701   13076 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1014 07:22:04.233053   13076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1014 07:22:04.912894   13076 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1014 07:22:04.925574   13076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-132600 minikube.k8s.io/updated_at=2024_10_14T07_22_04_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b minikube.k8s.io/name=ha-132600 minikube.k8s.io/primary=true
	I1014 07:22:04.927795   13076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 07:22:04.958241   13076 ops.go:34] apiserver oom_adj: -16
	I1014 07:22:05.168034   13076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 07:22:05.667048   13076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 07:22:06.166157   13076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 07:22:06.667561   13076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 07:22:07.167283   13076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 07:22:07.305312   13076 kubeadm.go:1113] duration metric: took 2.3924152s to wait for elevateKubeSystemPrivileges
	I1014 07:22:07.306284   13076 kubeadm.go:394] duration metric: took 18.730142s to StartCluster
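
The burst of `kubectl get sa default` calls between 07:22:05 and 07:22:07 is the elevateKubeSystemPrivileges wait: the controller manager creates the "default" ServiceAccount asynchronously after RBAC comes up, so minikube polls on roughly a half-second tick until the lookup succeeds (2.39s here). The same readiness check, using the paths from this run:

    # Block until the default ServiceAccount exists (the signal the log polls for)
    until sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
        sleep 0.5
    done
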
	I1014 07:22:07.306284   13076 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:22:07.306284   13076 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1014 07:22:07.308077   13076 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 07:22:07.309777   13076 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1014 07:22:07.309777   13076 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.20.108.120 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1014 07:22:07.309953   13076 start.go:241] waiting for startup goroutines ...
	I1014 07:22:07.309868   13076 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1014 07:22:07.310143   13076 addons.go:69] Setting storage-provisioner=true in profile "ha-132600"
	I1014 07:22:07.310235   13076 addons.go:234] Setting addon storage-provisioner=true in "ha-132600"
	I1014 07:22:07.310143   13076 addons.go:69] Setting default-storageclass=true in profile "ha-132600"
	I1014 07:22:07.310402   13076 config.go:182] Loaded profile config "ha-132600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:22:07.310402   13076 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-132600"
	I1014 07:22:07.310402   13076 host.go:66] Checking if "ha-132600" exists ...
	I1014 07:22:07.311457   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:22:07.311926   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:22:07.495748   13076 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.20.96.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1014 07:22:08.130501   13076 start.go:971] {"host.minikube.internal": 172.20.96.1} host record injected into CoreDNS's ConfigMap
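
The one-line pipeline at 07:22:07.49 is how the host.minikube.internal record lands in CoreDNS: fetch the coredns ConfigMap, splice a hosts block (and a log directive) into the Corefile just before the forward directive, then replace the ConfigMap. The same pipeline reflowed for readability (sudo, binary path, and --kubeconfig flags dropped):

    kubectl -n kube-system get configmap coredns -o yaml \
      | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.20.96.1 host.minikube.internal\n           fallthrough\n        }' \
            -e '/^        errors *$/i \        log' \
      | kubectl replace -f -
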
	I1014 07:22:09.594501   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:22:09.594501   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:09.594786   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:22:09.594878   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:09.595740   13076 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1014 07:22:09.596691   13076 kapi.go:59] client config for ha-132600: &rest.Config{Host:"https://172.20.111.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-132600\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\ha-132600\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2926ae0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1014 07:22:09.597939   13076 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 07:22:09.598267   13076 cert_rotation.go:140] Starting client certificate rotation controller
	I1014 07:22:09.598703   13076 addons.go:234] Setting addon default-storageclass=true in "ha-132600"
	I1014 07:22:09.598956   13076 host.go:66] Checking if "ha-132600" exists ...
	I1014 07:22:09.600143   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:22:09.600143   13076 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 07:22:09.600247   13076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1014 07:22:09.600352   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:22:11.901872   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:22:11.901872   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:11.902855   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:22:11.913525   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:22:11.914554   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:11.914725   13076 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1014 07:22:11.914841   13076 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1014 07:22:11.915002   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600 ).state
	I1014 07:22:14.163865   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:22:14.163865   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:14.163865   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600 ).networkadapters[0]).ipaddresses[0]
	I1014 07:22:14.597752   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:22:14.597752   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:14.598273   13076 sshutil.go:53] new ssh client: &{IP:172.20.108.120 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\id_rsa Username:docker}
	I1014 07:22:14.755906   13076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 07:22:16.766331   13076 main.go:141] libmachine: [stdout =====>] : 172.20.108.120
	
	I1014 07:22:16.766798   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:16.767079   13076 sshutil.go:53] new ssh client: &{IP:172.20.108.120 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600\id_rsa Username:docker}
	I1014 07:22:16.907784   13076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1014 07:22:17.107355   13076 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1014 07:22:17.107355   13076 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1014 07:22:17.108322   13076 round_trippers.go:463] GET https://172.20.111.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1014 07:22:17.108322   13076 round_trippers.go:469] Request Headers:
	I1014 07:22:17.108322   13076 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:22:17.108322   13076 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:22:17.120122   13076 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1014 07:22:17.120927   13076 round_trippers.go:463] PUT https://172.20.111.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1014 07:22:17.121006   13076 round_trippers.go:469] Request Headers:
	I1014 07:22:17.121006   13076 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 07:22:17.121006   13076 round_trippers.go:473]     Accept: application/json, */*
	I1014 07:22:17.121089   13076 round_trippers.go:473]     Content-Type: application/json
	I1014 07:22:17.124876   13076 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 07:22:17.128782   13076 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1014 07:22:17.133792   13076 addons.go:510] duration metric: took 9.8239112s for enable addons: enabled=[storage-provisioner default-storageclass]
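
The GET/PUT pair against /apis/storage.k8s.io/v1/storageclasses at 07:22:17 is the default-storageclass addon marking the bundled "standard" class as the cluster default, which Kubernetes expresses through the storageclass.kubernetes.io/is-default-class annotation. A sketch of the equivalent manual step (not taken from this log):

    # Flag the "standard" StorageClass as the default for unqualified PVCs
    kubectl patch storageclass standard \
        -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
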
	I1014 07:22:17.133792   13076 start.go:246] waiting for cluster config update ...
	I1014 07:22:17.133792   13076 start.go:255] writing updated cluster config ...
	I1014 07:22:17.140793   13076 out.go:201] 
	I1014 07:22:17.150794   13076 config.go:182] Loaded profile config "ha-132600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:22:17.151784   13076 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\config.json ...
	I1014 07:22:17.157758   13076 out.go:177] * Starting "ha-132600-m02" control-plane node in "ha-132600" cluster
	I1014 07:22:17.160381   13076 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1014 07:22:17.160381   13076 cache.go:56] Caching tarball of preloaded images
	I1014 07:22:17.160381   13076 preload.go:172] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1014 07:22:17.160381   13076 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1014 07:22:17.160381   13076 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\config.json ...
	I1014 07:22:17.168672   13076 start.go:360] acquireMachinesLock for ha-132600-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 07:22:17.169194   13076 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-132600-m02"
	I1014 07:22:17.169334   13076 start.go:93] Provisioning new machine with config: &{Name:ha-132600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-132600 Namespace:default APIServerHAVIP:172.20.111.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.108.120 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1014 07:22:17.169334   13076 start.go:125] createHost starting for "m02" (driver="hyperv")
	I1014 07:22:17.171957   13076 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1014 07:22:17.171957   13076 start.go:159] libmachine.API.Create for "ha-132600" (driver="hyperv")
	I1014 07:22:17.171957   13076 client.go:168] LocalClient.Create starting
	I1014 07:22:17.171957   13076 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I1014 07:22:17.173148   13076 main.go:141] libmachine: Decoding PEM data...
	I1014 07:22:17.173148   13076 main.go:141] libmachine: Parsing certificate...
	I1014 07:22:17.173148   13076 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I1014 07:22:17.173148   13076 main.go:141] libmachine: Decoding PEM data...
	I1014 07:22:17.173802   13076 main.go:141] libmachine: Parsing certificate...
	I1014 07:22:17.173802   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I1014 07:22:19.094383   13076 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I1014 07:22:19.094383   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:19.094970   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I1014 07:22:20.808796   13076 main.go:141] libmachine: [stdout =====>] : False
	
	I1014 07:22:20.808796   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:20.808897   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1014 07:22:22.292959   13076 main.go:141] libmachine: [stdout =====>] : True
	
	I1014 07:22:22.292959   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:22.293074   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1014 07:22:25.811891   13076 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1014 07:22:25.811891   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:25.813583   13076 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1014 07:22:26.334218   13076 main.go:141] libmachine: Creating SSH key...
	I1014 07:22:26.579833   13076 main.go:141] libmachine: Creating VM...
	I1014 07:22:26.580305   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1014 07:22:29.402237   13076 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1014 07:22:29.402237   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:29.402237   13076 main.go:141] libmachine: Using switch "Default Switch"
	I1014 07:22:29.402237   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1014 07:22:31.181025   13076 main.go:141] libmachine: [stdout =====>] : True
	
	I1014 07:22:31.181025   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:31.181025   13076 main.go:141] libmachine: Creating VHD
	I1014 07:22:31.181025   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I1014 07:22:34.986705   13076 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 9BEBB030-6569-4A5D-A43D-C976E242B24D
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I1014 07:22:34.986953   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:34.986953   13076 main.go:141] libmachine: Writing magic tar header
	I1014 07:22:34.986953   13076 main.go:141] libmachine: Writing SSH key tar header
	I1014 07:22:34.998864   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I1014 07:22:38.117497   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:22:38.117497   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:38.117497   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\disk.vhd' -SizeBytes 20000MB
	I1014 07:22:40.750048   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:22:40.750048   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:40.750048   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-132600-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I1014 07:22:44.293371   13076 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-132600-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I1014 07:22:44.294009   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:44.294158   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-132600-m02 -DynamicMemoryEnabled $false
	I1014 07:22:46.495846   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:22:46.496106   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:46.496211   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-132600-m02 -Count 2
	I1014 07:22:48.637702   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:22:48.637702   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:48.637702   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-132600-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\boot2docker.iso'
	I1014 07:22:51.128606   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:22:51.128606   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:51.128606   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-132600-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\disk.vhd'
	I1014 07:22:53.765202   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:22:53.765775   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:53.765775   13076 main.go:141] libmachine: Starting VM...
	I1014 07:22:53.765847   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-132600-m02
	I1014 07:22:56.961396   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:22:56.961396   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:56.962040   13076 main.go:141] libmachine: Waiting for host to start...
	I1014 07:22:56.962040   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:22:59.251592   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:22:59.251592   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:22:59.251838   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:01.758067   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:23:01.758159   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:02.758293   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:04.921490   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:04.921565   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:04.921802   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:07.441595   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:23:07.441595   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:08.443053   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:10.659721   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:10.659721   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:10.660418   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:13.126618   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:23:13.126952   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:14.127183   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:16.307414   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:16.307414   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:16.307500   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:18.748902   13076 main.go:141] libmachine: [stdout =====>] : 
	I1014 07:23:18.749678   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:19.749962   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:21.893323   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:21.893323   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:21.893323   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:24.464351   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:23:24.465392   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:24.465471   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:26.567626   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:26.567626   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:26.567626   13076 machine.go:93] provisionDockerMachine start ...
	I1014 07:23:26.568400   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:28.681591   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:28.681591   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:28.682618   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:31.178308   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:23:31.178308   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:31.192309   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:23:31.207523   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.111.83 22 <nil> <nil>}
	I1014 07:23:31.208024   13076 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 07:23:31.337861   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1014 07:23:31.337861   13076 buildroot.go:166] provisioning hostname "ha-132600-m02"
	I1014 07:23:31.337941   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:33.402839   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:33.402839   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:33.403340   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:35.894312   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:23:35.894312   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:35.901210   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:23:35.901699   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.111.83 22 <nil> <nil>}
	I1014 07:23:35.901823   13076 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-132600-m02 && echo "ha-132600-m02" | sudo tee /etc/hostname
	I1014 07:23:36.071040   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-132600-m02
	
	I1014 07:23:36.071040   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:38.150707   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:38.151729   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:38.151729   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:40.639242   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:23:40.639242   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:40.644598   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:23:40.645419   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.111.83 22 <nil> <nil>}
	I1014 07:23:40.645490   13076 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-132600-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-132600-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-132600-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 07:23:40.801974   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 07:23:40.801974   13076 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I1014 07:23:40.801974   13076 buildroot.go:174] setting up certificates
	I1014 07:23:40.801974   13076 provision.go:84] configureAuth start
	I1014 07:23:40.801974   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:42.869101   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:42.869101   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:42.869101   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:45.382211   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:23:45.382688   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:45.382688   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:47.501182   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:47.501778   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:47.501832   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:50.008838   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:23:50.008838   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:50.009193   13076 provision.go:143] copyHostCerts
	I1014 07:23:50.009346   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I1014 07:23:50.009346   13076 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I1014 07:23:50.009346   13076 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I1014 07:23:50.010015   13076 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1014 07:23:50.011395   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I1014 07:23:50.011967   13076 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I1014 07:23:50.011967   13076 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I1014 07:23:50.011967   13076 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1014 07:23:50.013212   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I1014 07:23:50.014109   13076 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I1014 07:23:50.014109   13076 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I1014 07:23:50.014507   13076 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I1014 07:23:50.015649   13076 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-132600-m02 san=[127.0.0.1 172.20.111.83 ha-132600-m02 localhost minikube]
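
provision.go:117 issues a per-machine Docker server certificate signed by the minikube CA, with SANs covering every name the daemon may be reached by (the san=[...] list above). minikube does this in Go rather than by shelling out; a rough openssl equivalent of the same step, with the SAN list and org copied from the log and the validity period illustrative:

    # Key and CSR for the machine (org matches the log's org=jenkins.ha-132600-m02)
    openssl genrsa -out server-key.pem 2048
    openssl req -new -key server-key.pem -subj "/O=jenkins.ha-132600-m02" -out server.csr
    # Sign with the minikube CA, attaching the SANs the daemon will answer to
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
        -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:172.20.111.83,DNS:ha-132600-m02,DNS:localhost,DNS:minikube") \
        -days 365 -out server.pem
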
	I1014 07:23:50.130152   13076 provision.go:177] copyRemoteCerts
	I1014 07:23:50.140319   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 07:23:50.140319   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:52.242533   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:52.242533   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:52.243067   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:54.757999   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:23:54.758132   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:54.758132   13076 sshutil.go:53] new ssh client: &{IP:172.20.111.83 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\id_rsa Username:docker}
	I1014 07:23:54.867658   13076 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7273333s)
	I1014 07:23:54.867658   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1014 07:23:54.867658   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1014 07:23:54.915126   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1014 07:23:54.915808   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I1014 07:23:54.962775   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1014 07:23:54.963449   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1014 07:23:55.012733   13076 provision.go:87] duration metric: took 14.2107408s to configureAuth
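
With configureAuth done, the VM holds /etc/docker/ca.pem, server.pem, and server-key.pem, which the dockerd flags at the end of this section wire up via --tlsverify. Two quick consistency checks on the provisioned material (illustrative, not part of this run; -ext needs OpenSSL 1.1.1+):

    # The server cert should chain to the provisioned CA...
    openssl verify -CAfile /etc/docker/ca.pem /etc/docker/server.pem
    # ...and carry the SANs the machine is addressed by
    openssl x509 -in /etc/docker/server.pem -noout -ext subjectAltName
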
	I1014 07:23:55.012733   13076 buildroot.go:189] setting minikube options for container-runtime
	I1014 07:23:55.013734   13076 config.go:182] Loaded profile config "ha-132600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:23:55.013734   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:23:57.174688   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:23:57.174688   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:57.175736   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:23:59.668037   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:23:59.668597   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:23:59.675074   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:23:59.675608   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.111.83 22 <nil> <nil>}
	I1014 07:23:59.675873   13076 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1014 07:23:59.826626   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1014 07:23:59.826626   13076 buildroot.go:70] root file system type: tmpfs
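The df probe above is how buildroot.go learns the root filesystem type; a tmpfs root tells it the unit file can be written straight into /lib/systemd/system. As a sketch of the branch (the non-tmpfs fallback is an assumption, not something this run exercises):

    fstype="$(df --output=fstype / | tail -n 1)"
    if [ "$fstype" = "tmpfs" ]; then
      unit_dir=/lib/systemd/system   # in-memory root, as seen in this run
    else
      unit_dir=/etc/systemd/system   # hypothetical fallback for a persistent root
    fi
    echo "installing docker.service into $unit_dir"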
	I1014 07:23:59.826926   13076 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1014 07:23:59.827038   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:01.963890   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:01.964568   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:01.964568   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:04.515125   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:04.515943   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:04.521824   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:24:04.522248   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.111.83 22 <nil> <nil>}
	I1014 07:24:04.522248   13076 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.20.108.120"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1014 07:24:04.691427   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.20.108.120
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1014 07:24:04.692090   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:06.819823   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:06.820339   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:06.820339   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:09.321630   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:09.321630   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:09.326748   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:24:09.327748   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.111.83 22 <nil> <nil>}
	I1014 07:24:09.327808   13076 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1014 07:24:11.561493   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1014 07:24:11.561493   13076 machine.go:96] duration metric: took 44.993809s to provisionDockerMachine
	I1014 07:24:11.561493   13076 client.go:171] duration metric: took 1m54.3893961s to LocalClient.Create
	I1014 07:24:11.561493   13076 start.go:167] duration metric: took 1m54.3893961s to libmachine.API.Create "ha-132600"
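The diff-and-replace step a few lines up is a compare-and-swap install: the rendered unit only overwrites the old one, and docker is only restarted, when the text actually differs. Here diff fails because no docker.service exists yet on the fresh m02 node, so the replace branch runs and systemd reports the newly created multi-user.target.wants symlink. A generic sketch of the idiom (render_unit is a hypothetical stand-in for the template expansion):

    # Install-if-changed idiom, illustrative only.
    render_unit > /tmp/docker.service.new
    if ! sudo diff -u /lib/systemd/system/docker.service /tmp/docker.service.new; then
      sudo mv /tmp/docker.service.new /lib/systemd/system/docker.service
      sudo systemctl daemon-reload
      sudo systemctl enable docker
      sudo systemctl restart docker
    fi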
	I1014 07:24:11.561493   13076 start.go:293] postStartSetup for "ha-132600-m02" (driver="hyperv")
	I1014 07:24:11.561493   13076 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 07:24:11.572490   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 07:24:11.572490   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:13.652544   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:13.653401   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:13.653599   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:16.188638   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:16.188638   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:16.189304   13076 sshutil.go:53] new ssh client: &{IP:172.20.111.83 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\id_rsa Username:docker}
	I1014 07:24:16.298812   13076 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7262867s)
	I1014 07:24:16.310231   13076 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 07:24:16.316927   13076 info.go:137] Remote host: Buildroot 2023.02.9
	I1014 07:24:16.316927   13076 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I1014 07:24:16.317465   13076 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I1014 07:24:16.318548   13076 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem -> 9362.pem in /etc/ssl/certs
	I1014 07:24:16.318548   13076 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem -> /etc/ssl/certs/9362.pem
	I1014 07:24:16.332285   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 07:24:16.351469   13076 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem --> /etc/ssl/certs/9362.pem (1708 bytes)
	I1014 07:24:16.401820   13076 start.go:296] duration metric: took 4.8403207s for postStartSetup
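postStartSetup mirrored the host-side .minikube\files tree into the guest, so the 9362.pem asset now sits at /etc/ssl/certs/9362.pem. A quick in-VM sanity check one could run (illustrative, not part of the test):

    # Confirm the synced certificate arrived intact and is parseable.
    test -s /etc/ssl/certs/9362.pem && \
      openssl x509 -in /etc/ssl/certs/9362.pem -noout -subject -enddate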
	I1014 07:24:16.404096   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:18.500218   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:18.501234   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:18.501234   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:20.974982   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:20.975125   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:20.975125   13076 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ha-132600\config.json ...
	I1014 07:24:20.978124   13076 start.go:128] duration metric: took 2m3.8086365s to createHost
	I1014 07:24:20.978209   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:23.112937   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:23.113001   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:23.113001   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:25.636675   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:25.637206   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:25.641953   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:24:25.642670   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.111.83 22 <nil> <nil>}
	I1014 07:24:25.642670   13076 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1014 07:24:25.768945   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728915865.766702127
	
	I1014 07:24:25.769174   13076 fix.go:216] guest clock: 1728915865.766702127
	I1014 07:24:25.769174   13076 fix.go:229] Guest: 2024-10-14 07:24:25.766702127 -0700 PDT Remote: 2024-10-14 07:24:20.978124 -0700 PDT m=+320.730336401 (delta=4.788578127s)
	I1014 07:24:25.769174   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:27.845838   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:27.845838   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:27.845838   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:30.364175   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:30.364263   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:30.369382   13076 main.go:141] libmachine: Using SSH client type: native
	I1014 07:24:30.369967   13076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.111.83 22 <nil> <nil>}
	I1014 07:24:30.370049   13076 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1728915865
	I1014 07:24:30.508807   13076 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Oct 14 14:24:25 UTC 2024
	
	I1014 07:24:30.508807   13076 fix.go:236] clock set: Mon Oct 14 14:24:25 UTC 2024
	 (err=<nil>)
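The fix.go lines above compare the guest's `date +%s.%N` against the host clock; the 4.79s delta is large enough that the guest is stepped to the host epoch with `date -s @1728915865`. The same check as a shell sketch (the 2s tolerance is an assumed value, not read from the log):

    host_epoch=$(date +%s)
    guest_epoch=$(ssh docker@172.20.111.83 date +%s)
    delta=$(( host_epoch - guest_epoch ))
    # Step the guest clock if the absolute skew exceeds the assumed tolerance.
    if [ "${delta#-}" -gt 2 ]; then
      ssh docker@172.20.111.83 sudo date -s "@$host_epoch"
    fi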
	I1014 07:24:30.508807   13076 start.go:83] releasing machines lock for "ha-132600-m02", held for 2m13.3394475s
	I1014 07:24:30.508807   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:32.570516   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:32.570516   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:32.570516   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:35.081338   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:35.081338   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:35.084330   13076 out.go:177] * Found network options:
	I1014 07:24:35.087197   13076 out.go:177]   - NO_PROXY=172.20.108.120
	W1014 07:24:35.089531   13076 proxy.go:119] fail to check proxy env: Error ip not in block
	I1014 07:24:35.091879   13076 out.go:177]   - NO_PROXY=172.20.108.120
	W1014 07:24:35.094409   13076 proxy.go:119] fail to check proxy env: Error ip not in block
	W1014 07:24:35.095622   13076 proxy.go:119] fail to check proxy env: Error ip not in block
	I1014 07:24:35.098960   13076 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1014 07:24:35.099124   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:35.109488   13076 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1014 07:24:35.109488   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-132600-m02 ).state
	I1014 07:24:37.293745   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:37.293818   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:37.293870   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:37.344951   13076 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 07:24:37.344951   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:37.344951   13076 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-132600-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 07:24:39.933767   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:39.934139   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:39.934445   13076 sshutil.go:53] new ssh client: &{IP:172.20.111.83 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\id_rsa Username:docker}
	I1014 07:24:39.999458   13076 main.go:141] libmachine: [stdout =====>] : 172.20.111.83
	
	I1014 07:24:39.999458   13076 main.go:141] libmachine: [stderr =====>] : 
	I1014 07:24:39.999458   13076 sshutil.go:53] new ssh client: &{IP:172.20.111.83 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\ha-132600-m02\id_rsa Username:docker}
	I1014 07:24:40.039802   13076 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.930308s)
	W1014 07:24:40.039802   13076 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 07:24:40.051889   13076 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 07:24:40.057601   13076 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.9585844s)
	W1014 07:24:40.057601   13076 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
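Status 127 with "curl.exe: command not found" means the Windows binary name leaked into a command executed inside the Linux guest, which ships only plain curl; the registry is never actually probed, and this is what resurfaces just below as the "Failing to connect to https://registry.k8s.io/" warning. What the reachability check presumably intends inside the VM (illustrative):

    # Probe the image registry from inside the guest using the Linux binary name.
    curl -sS -m 2 https://registry.k8s.io/ && echo "registry reachable"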
	I1014 07:24:40.090774   13076 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1014 07:24:40.090774   13076 start.go:495] detecting cgroup driver to use...
	I1014 07:24:40.091315   13076 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 07:24:40.139757   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1014 07:24:40.172033   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	W1014 07:24:40.178804   13076 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W1014 07:24:40.178804   13076 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1014 07:24:40.194040   13076 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1014 07:24:40.209169   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1014 07:24:40.241042   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1014 07:24:40.273436   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1014 07:24:40.304842   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1014 07:24:40.336705   13076 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 07:24:40.371635   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1014 07:24:40.404378   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1014 07:24:40.435805   13076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1014 07:24:40.485319   13076 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 07:24:40.505473   13076 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1014 07:24:40.516975   13076 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1014 07:24:40.550905   13076 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
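The status-255 sysctl above is expected at this point: /proc/sys/net/bridge/* only exists once br_netfilter is loaded, so minikube loads the module and then enables IPv4 forwarding, the two standard prerequisites for bridge-based pod networking. A sketch that also persists the settings across reboots (the persistence files are an assumption beyond what this log shows):

    sudo modprobe br_netfilter
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    # Hypothetical persistence, not performed in this run:
    echo br_netfilter | sudo tee /etc/modules-load.d/k8s.conf
    printf 'net.bridge.bridge-nf-call-iptables = 1\nnet.ipv4.ip_forward = 1\n' | \
      sudo tee /etc/sysctl.d/k8s.conf
    sudo sysctl --system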
	I1014 07:24:40.581632   13076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:24:40.785773   13076 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1014 07:24:40.819978   13076 start.go:495] detecting cgroup driver to use...
	I1014 07:24:40.830992   13076 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1014 07:24:40.876801   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 07:24:40.913930   13076 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 07:24:40.966444   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 07:24:41.006432   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1014 07:24:41.044531   13076 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1014 07:24:41.114617   13076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1014 07:24:41.139995   13076 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 07:24:41.187779   13076 ssh_runner.go:195] Run: which cri-dockerd
	I1014 07:24:41.207367   13076 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1014 07:24:41.226988   13076 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1014 07:24:41.272924   13076 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1014 07:24:41.470204   13076 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1014 07:24:41.650925   13076 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1014 07:24:41.650925   13076 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
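The 130-byte /etc/docker/daemon.json written here pins Docker to the cgroupfs driver so it matches the kubelet. The log does not print the payload; a daemon.json of roughly this shape is the documented way to select the driver (illustrative contents, not the actual bytes):

    # Assumed shape of the generated daemon.json; the real 130-byte payload is not shown above.
    printf '{\n  "exec-opts": ["native.cgroupdriver=cgroupfs"]\n}\n' | \
      sudo tee /etc/docker/daemon.json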
	I1014 07:24:41.694417   13076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 07:24:41.893295   13076 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1014 07:25:43.008408   13076 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.115034s)
	I1014 07:25:43.020423   13076 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I1014 07:25:43.059257   13076 out.go:201] 
	W1014 07:25:43.062124   13076 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Oct 14 14:24:09 ha-132600-m02 systemd[1]: Starting Docker Application Container Engine...
	Oct 14 14:24:09 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:09.938489944Z" level=info msg="Starting up"
	Oct 14 14:24:09 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:09.941531731Z" level=info msg="containerd not running, starting managed containerd"
	Oct 14 14:24:09 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:09.942638427Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=671
	Oct 14 14:24:09 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:09.976195986Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.003978070Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004049170Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004119469Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004233069Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004318268Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004457268Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004664767Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004761167Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004782067Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004795467Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.004887666Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.005331964Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.008453752Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.008545052Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.008673551Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.008759151Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.008859951Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.008988550Z" level=info msg="metadata content store policy set" policy=shared
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.034095951Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.034335550Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.034407650Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.034432250Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.034449450Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.034603849Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.034945948Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035129847Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035263947Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035294447Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035312846Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035328546Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035343646Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035419846Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035443346Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035495946Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035526946Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035542746Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035565345Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035583545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035598745Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035630745Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035648245Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035663945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035689645Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035709545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035731545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035750545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035764245Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035777845Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035792145Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035809345Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035833944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035849444Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.035864444Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036025444Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036057644Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036074443Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036087843Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036099043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036115343Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036131843Z" level=info msg="NRI interface is disabled by configuration."
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036482842Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036737141Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036799541Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Oct 14 14:24:10 ha-132600-m02 dockerd[671]: time="2024-10-14T14:24:10.036822041Z" level=info msg="containerd successfully booted in 0.062283s"
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.009541514Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.046546477Z" level=info msg="Loading containers: start."
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.205429391Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.426461627Z" level=info msg="Loading containers: done."
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.448023414Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.448125314Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.448200613Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.448511212Z" level=info msg="Daemon has completed initialization"
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.556153647Z" level=info msg="API listen on /var/run/docker.sock"
	Oct 14 14:24:11 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:11.556338446Z" level=info msg="API listen on [::]:2376"
	Oct 14 14:24:11 ha-132600-m02 systemd[1]: Started Docker Application Container Engine.
	Oct 14 14:24:41 ha-132600-m02 systemd[1]: Stopping Docker Application Container Engine...
	Oct 14 14:24:41 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:41.916201367Z" level=info msg="Processing signal 'terminated'"
	Oct 14 14:24:41 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:41.918697964Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Oct 14 14:24:41 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:41.919058063Z" level=info msg="Daemon shutdown complete"
	Oct 14 14:24:41 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:41.919215663Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Oct 14 14:24:41 ha-132600-m02 dockerd[665]: time="2024-10-14T14:24:41.919252063Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Oct 14 14:24:42 ha-132600-m02 systemd[1]: docker.service: Deactivated successfully.
	Oct 14 14:24:42 ha-132600-m02 systemd[1]: Stopped Docker Application Container Engine.
	Oct 14 14:24:42 ha-132600-m02 systemd[1]: Starting Docker Application Container Engine...
	Oct 14 14:24:42 ha-132600-m02 dockerd[1075]: time="2024-10-14T14:24:42.978494717Z" level=info msg="Starting up"
	Oct 14 14:25:43 ha-132600-m02 dockerd[1075]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Oct 14 14:25:43 ha-132600-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Oct 14 14:25:43 ha-132600-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Oct 14 14:25:43 ha-132600-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
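One plausible reading of this journal: the first dockerd (pid 665) found no system containerd and started its own managed copy on /var/run/docker/containerd/containerd.sock, and everything ran; minikube then stopped the standalone containerd service (the `sudo systemctl stop -f containerd` at 07:24:40 above) and restarted docker, and the second dockerd (pid 1075) instead waits on the system socket /run/containerd/containerd.sock until the 60s dial deadline expires. Illustrative first diagnostics for this failure mode inside the VM:

    # Check which containerd (if any) owns the socket dockerd is dialing.
    systemctl status containerd docker --no-pager
    ls -l /run/containerd/containerd.sock /var/run/docker/containerd/ 2>&1
    sudo journalctl -u containerd --no-pager | tail -n 20
    # If the system containerd was stopped earlier, starting it should unblock dockerd.
    sudo systemctl start containerd && sudo systemctl restart docker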
	W1014 07:25:43.062124   13076 out.go:270] * 
	W1014 07:25:43.062827   13076 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 07:25:43.067027   13076 out.go:201] 
	
	
	==> Docker <==
	Oct 14 14:22:32 ha-132600 cri-dockerd[1324]: time="2024-10-14T14:22:32Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d7137fb8ec569089599a03e11d18eda509d210d29fa54b5e6a0a8d7dd7a54e7f/resolv.conf as [nameserver 172.20.96.1]"
	Oct 14 14:22:32 ha-132600 cri-dockerd[1324]: time="2024-10-14T14:22:32Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/90ffeb0ce1aafcb639832f7144bd2dbb0b5c9deb53555b49e386b3d3e96d8bc3/resolv.conf as [nameserver 172.20.96.1]"
	Oct 14 14:22:32 ha-132600 cri-dockerd[1324]: time="2024-10-14T14:22:32Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7278ecf7cc9466e1fee2170d699055f6498d2ece8da31c4b3696ed85b3cd38cf/resolv.conf as [nameserver 172.20.96.1]"
	Oct 14 14:22:32 ha-132600 dockerd[1431]: time="2024-10-14T14:22:32.922383735Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 14 14:22:32 ha-132600 dockerd[1431]: time="2024-10-14T14:22:32.922463635Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 14 14:22:32 ha-132600 dockerd[1431]: time="2024-10-14T14:22:32.922481235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:22:32 ha-132600 dockerd[1431]: time="2024-10-14T14:22:32.922580735Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:22:32 ha-132600 dockerd[1431]: time="2024-10-14T14:22:32.996609949Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 14 14:22:32 ha-132600 dockerd[1431]: time="2024-10-14T14:22:32.996807948Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 14 14:22:32 ha-132600 dockerd[1431]: time="2024-10-14T14:22:32.996907948Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:22:32 ha-132600 dockerd[1431]: time="2024-10-14T14:22:32.998275745Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:22:33 ha-132600 dockerd[1431]: time="2024-10-14T14:22:33.069443031Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 14 14:22:33 ha-132600 dockerd[1431]: time="2024-10-14T14:22:33.070022429Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 14 14:22:33 ha-132600 dockerd[1431]: time="2024-10-14T14:22:33.070369527Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:22:33 ha-132600 dockerd[1431]: time="2024-10-14T14:22:33.076841098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:26:17 ha-132600 dockerd[1431]: time="2024-10-14T14:26:17.675114485Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 14 14:26:17 ha-132600 dockerd[1431]: time="2024-10-14T14:26:17.675249284Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 14 14:26:17 ha-132600 dockerd[1431]: time="2024-10-14T14:26:17.675264584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:26:17 ha-132600 dockerd[1431]: time="2024-10-14T14:26:17.675974482Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:26:17 ha-132600 cri-dockerd[1324]: time="2024-10-14T14:26:17Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/91751edf004eb5236aa2443a5cff30bdc82ba05d39a3583d9026a4f19faba52f/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Oct 14 14:26:19 ha-132600 cri-dockerd[1324]: time="2024-10-14T14:26:19Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Oct 14 14:26:19 ha-132600 dockerd[1431]: time="2024-10-14T14:26:19.456978196Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 14 14:26:19 ha-132600 dockerd[1431]: time="2024-10-14T14:26:19.457173796Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 14 14:26:19 ha-132600 dockerd[1431]: time="2024-10-14T14:26:19.457616296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 14:26:19 ha-132600 dockerd[1431]: time="2024-10-14T14:26:19.457980197Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2b8d44d586369       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   24 minutes ago      Running             busybox                   0                   91751edf004eb       busybox-7dff88458-kr92j
	5a6196684fc6e       c69fa2e9cbf5f                                                                                         27 minutes ago      Running             coredns                   0                   7278ecf7cc946       coredns-7c65d6cfc9-4qfrq
	81d6fdac8115f       6e38f40d628db                                                                                         27 minutes ago      Running             storage-provisioner       0                   90ffeb0ce1aaf       storage-provisioner
	52c3e5370a6c8       c69fa2e9cbf5f                                                                                         27 minutes ago      Running             coredns                   0                   d7137fb8ec569       coredns-7c65d6cfc9-zf6cd
	dae2c0aa67af3       kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387              28 minutes ago      Running             kindnet-cni               0                   277e64b59b8aa       kindnet-rkjqr
	4745a4b0dc379       60c005f310ff3                                                                                         28 minutes ago      Running             kube-proxy                0                   15f605d31df67       kube-proxy-zkbj8
	b08f7a3c2a5a6       ghcr.io/kube-vip/kube-vip@sha256:9e23baad11ae3e69d739430b9fdb60df22356b7da4b4f4e458fae0541619deb4     28 minutes ago      Running             kube-vip                  0                   46cfd7b278d40       kube-vip-ha-132600
	84fd76d493d80       2e96e5913fc06                                                                                         28 minutes ago      Running             etcd                      0                   d71566d00e2f9       etcd-ha-132600
	b661cb6713103       6bab7719df100                                                                                         28 minutes ago      Running             kube-apiserver            0                   cb18be6165830       kube-apiserver-ha-132600
	35c870864a800       9aa1fad941575                                                                                         28 minutes ago      Running             kube-scheduler            0                   36593363b631a       kube-scheduler-ha-132600
	4a8cce31aa3a2       175ffd71cce3d                                                                                         28 minutes ago      Running             kube-controller-manager   0                   f5fa69fe68b07       kube-controller-manager-ha-132600
	
	
	==> coredns [52c3e5370a6c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6020d6b23d8ab86e45d1d2aab12a43bd19ffd1800c3e6fd0a66779be525e59cd5618fbe141b55ee3a43e920652bc72e7efafa696e55e63b2f614e5fc80dabe10
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:41885 - 43638 "HINFO IN 5147527986633681541.7511835962423692488. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.048565283s
	[INFO] 10.244.0.4:49801 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0004036s
	[INFO] 10.244.0.4:57121 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.178873632s
	[INFO] 10.244.0.4:54985 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000301s
	[INFO] 10.244.0.4:47603 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000273799s
	[INFO] 10.244.0.4:50030 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000276399s
	[INFO] 10.244.0.4:38124 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000439399s
	[INFO] 10.244.0.4:43764 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000347699s
	[INFO] 10.244.0.4:53583 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0003446s
	[INFO] 10.244.0.4:50771 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0002502s
	[INFO] 10.244.0.4:32805 - 5 "PTR IN 1.96.20.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.0002399s
	
	
	==> coredns [5a6196684fc6] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6020d6b23d8ab86e45d1d2aab12a43bd19ffd1800c3e6fd0a66779be525e59cd5618fbe141b55ee3a43e920652bc72e7efafa696e55e63b2f614e5fc80dabe10
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:50464 - 22593 "HINFO IN 8090798371432773748.7872157757733337044. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.084180423s
	[INFO] 10.244.0.4:46373 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.004105792s
	[INFO] 10.244.0.4:41175 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.166814894s
	[INFO] 10.244.0.4:43040 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002867s
	[INFO] 10.244.0.4:33549 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.045685805s
	[INFO] 10.244.0.4:60793 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000884598s
	[INFO] 10.244.0.4:57558 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0001339s
	[INFO] 10.244.0.4:33138 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.040147217s
	[INFO] 10.244.0.4:46303 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0002673s
	[INFO] 10.244.0.4:60761 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001771s
	[INFO] 10.244.0.4:55567 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000249599s
	
	
	==> describe nodes <==
	Name:               ha-132600
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-132600
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b
	                    minikube.k8s.io/name=ha-132600
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_14T07_22_04_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Oct 2024 14:22:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-132600
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Oct 2024 14:50:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 14 Oct 2024 14:47:05 +0000   Mon, 14 Oct 2024 14:22:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 14 Oct 2024 14:47:05 +0000   Mon, 14 Oct 2024 14:22:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 14 Oct 2024 14:47:05 +0000   Mon, 14 Oct 2024 14:22:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 14 Oct 2024 14:47:05 +0000   Mon, 14 Oct 2024 14:22:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.20.108.120
	  Hostname:    ha-132600
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 07bcc58a0c7e4f0eaae96253dcf0a4ae
	  System UUID:                e06d00fc-5ebc-1a49-bf31-4c6ce500fe9c
	  Boot ID:                    215d765d-07e3-4bb7-ba3c-58ab142d5afa
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-kr92j              0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 coredns-7c65d6cfc9-4qfrq             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 coredns-7c65d6cfc9-zf6cd             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-ha-132600                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kindnet-rkjqr                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      28m
	  kube-system                 kube-apiserver-ha-132600             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-ha-132600    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-zkbj8                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-ha-132600             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-vip-ha-132600                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 28m   kube-proxy       
	  Normal  Starting                 28m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  28m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  28m   kubelet          Node ha-132600 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m   kubelet          Node ha-132600 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28m   kubelet          Node ha-132600 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           28m   node-controller  Node ha-132600 event: Registered Node ha-132600 in Controller
	  Normal  NodeReady                27m   kubelet          Node ha-132600 status is now: NodeReady
	
	
	Name:               ha-132600-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-132600-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b
	                    minikube.k8s.io/name=ha-132600
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_14T07_42_14_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Oct 2024 14:42:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-132600-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Oct 2024 14:50:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 14 Oct 2024 14:48:21 +0000   Mon, 14 Oct 2024 14:42:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 14 Oct 2024 14:48:21 +0000   Mon, 14 Oct 2024 14:42:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 14 Oct 2024 14:48:21 +0000   Mon, 14 Oct 2024 14:42:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 14 Oct 2024 14:48:21 +0000   Mon, 14 Oct 2024 14:42:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.20.111.174
	  Hostname:    ha-132600-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 3833a05e76bc4842ac73a1d2223670ec
	  System UUID:                f1d27ade-116b-0642-ac95-727d62870b2a
	  Boot ID:                    07765716-ab52-4f7d-8d78-8fbd996c8cc0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-8thz6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kindnet-dznf8              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m5s
	  kube-system                 kube-proxy-q6wxd           0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 7m52s                kube-proxy       
	  Normal  NodeHasSufficientMemory  8m6s (x2 over 8m6s)  kubelet          Node ha-132600-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  8m6s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  CIDRAssignmentFailed     8m5s                 cidrAllocator    Node ha-132600-m03 status is now: CIDRAssignmentFailed
	  Normal  NodeHasNoDiskPressure    8m5s (x2 over 8m6s)  kubelet          Node ha-132600-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m5s (x2 over 8m6s)  kubelet          Node ha-132600-m03 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           8m2s                 node-controller  Node ha-132600-m03 event: Registered Node ha-132600-m03 in Controller
	  Normal  NodeReady                7m33s                kubelet          Node ha-132600-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +7.050916] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +45.956887] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.179434] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[Oct14 14:21] systemd-fstab-generator[997]: Ignoring "noauto" option for root device
	[  +0.095957] kauditd_printk_skb: 65 callbacks suppressed
	[  +0.527367] systemd-fstab-generator[1037]: Ignoring "noauto" option for root device
	[  +0.185829] systemd-fstab-generator[1049]: Ignoring "noauto" option for root device
	[  +0.220100] systemd-fstab-generator[1063]: Ignoring "noauto" option for root device
	[  +2.802946] systemd-fstab-generator[1277]: Ignoring "noauto" option for root device
	[  +0.209648] systemd-fstab-generator[1289]: Ignoring "noauto" option for root device
	[  +0.207153] systemd-fstab-generator[1301]: Ignoring "noauto" option for root device
	[  +0.288223] systemd-fstab-generator[1316]: Ignoring "noauto" option for root device
	[ +11.411364] systemd-fstab-generator[1416]: Ignoring "noauto" option for root device
	[  +0.107418] kauditd_printk_skb: 202 callbacks suppressed
	[  +3.752049] systemd-fstab-generator[1674]: Ignoring "noauto" option for root device
	[  +6.264384] systemd-fstab-generator[1823]: Ignoring "noauto" option for root device
	[  +0.102334] kauditd_printk_skb: 70 callbacks suppressed
	[  +5.637673] kauditd_printk_skb: 67 callbacks suppressed
	[Oct14 14:22] systemd-fstab-generator[2317]: Ignoring "noauto" option for root device
	[  +5.137329] kauditd_printk_skb: 17 callbacks suppressed
	[  +8.104433] kauditd_printk_skb: 29 callbacks suppressed
	[Oct14 14:26] kauditd_printk_skb: 28 callbacks suppressed
	[Oct14 14:42] hrtimer: interrupt took 6612884 ns
	
	
	==> etcd [84fd76d493d8] <==
	{"level":"warn","ts":"2024-10-14T14:42:06.838564Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"211.106689ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-14T14:42:06.838765Z","caller":"traceutil/trace.go:171","msg":"trace[744257141] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2578; }","duration":"211.373088ms","start":"2024-10-14T14:42:06.627381Z","end":"2024-10-14T14:42:06.838754Z","steps":["trace[744257141] 'agreement among raft nodes before linearized reading'  (duration: 210.908189ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-14T14:42:07.474608Z","caller":"traceutil/trace.go:171","msg":"trace[1611887997] linearizableReadLoop","detail":"{readStateIndex:2842; appliedIndex:2841; }","duration":"173.05738ms","start":"2024-10-14T14:42:07.301532Z","end":"2024-10-14T14:42:07.474589Z","steps":["trace[1611887997] 'read index received'  (duration: 172.879381ms)","trace[1611887997] 'applied index is now lower than readState.Index'  (duration: 177.299µs)"],"step_count":2}
	{"level":"info","ts":"2024-10-14T14:42:07.474817Z","caller":"traceutil/trace.go:171","msg":"trace[1935321534] transaction","detail":"{read_only:false; response_revision:2579; number_of_response:1; }","duration":"268.831248ms","start":"2024-10-14T14:42:07.205940Z","end":"2024-10-14T14:42:07.474771Z","steps":["trace[1935321534] 'process raft request'  (duration: 268.413149ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-14T14:42:07.475266Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"173.720679ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-14T14:42:07.475322Z","caller":"traceutil/trace.go:171","msg":"trace[213807606] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:2579; }","duration":"173.789679ms","start":"2024-10-14T14:42:07.301524Z","end":"2024-10-14T14:42:07.475314Z","steps":["trace[213807606] 'agreement among raft nodes before linearized reading'  (duration: 173.704979ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-14T14:42:19.018144Z","caller":"traceutil/trace.go:171","msg":"trace[410087756] transaction","detail":"{read_only:false; response_revision:2632; number_of_response:1; }","duration":"175.067373ms","start":"2024-10-14T14:42:18.843055Z","end":"2024-10-14T14:42:19.018123Z","steps":["trace[410087756] 'process raft request'  (duration: 174.514675ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-14T14:42:19.375954Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"237.591721ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-132600-m03\" ","response":"range_response_count:1 size:2962"}
	{"level":"info","ts":"2024-10-14T14:42:19.377451Z","caller":"traceutil/trace.go:171","msg":"trace[946791809] range","detail":"{range_begin:/registry/minions/ha-132600-m03; range_end:; response_count:1; response_revision:2632; }","duration":"239.116617ms","start":"2024-10-14T14:42:19.138318Z","end":"2024-10-14T14:42:19.377435Z","steps":["trace[946791809] 'range keys from in-memory index tree'  (duration: 237.374121ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-14T14:42:25.277317Z","caller":"traceutil/trace.go:171","msg":"trace[682053374] linearizableReadLoop","detail":"{readStateIndex:2917; appliedIndex:2916; }","duration":"140.762656ms","start":"2024-10-14T14:42:25.136515Z","end":"2024-10-14T14:42:25.277278Z","steps":["trace[682053374] 'read index received'  (duration: 140.586757ms)","trace[682053374] 'applied index is now lower than readState.Index'  (duration: 175.299µs)"],"step_count":2}
	{"level":"warn","ts":"2024-10-14T14:42:25.277542Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"140.959056ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-132600-m03\" ","response":"range_response_count:1 size:3114"}
	{"level":"info","ts":"2024-10-14T14:42:25.277634Z","caller":"traceutil/trace.go:171","msg":"trace[820348600] range","detail":"{range_begin:/registry/minions/ha-132600-m03; range_end:; response_count:1; response_revision:2648; }","duration":"141.112155ms","start":"2024-10-14T14:42:25.136511Z","end":"2024-10-14T14:42:25.277623Z","steps":["trace[820348600] 'agreement among raft nodes before linearized reading'  (duration: 140.928056ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-14T14:42:25.279675Z","caller":"traceutil/trace.go:171","msg":"trace[858523099] transaction","detail":"{read_only:false; response_revision:2648; number_of_response:1; }","duration":"166.194393ms","start":"2024-10-14T14:42:25.113464Z","end":"2024-10-14T14:42:25.279658Z","steps":["trace[858523099] 'process raft request'  (duration: 163.709399ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-14T14:42:25.921423Z","caller":"traceutil/trace.go:171","msg":"trace[45131034] transaction","detail":"{read_only:false; response_revision:2649; number_of_response:1; }","duration":"240.489012ms","start":"2024-10-14T14:42:25.680919Z","end":"2024-10-14T14:42:25.921408Z","steps":["trace[45131034] 'process raft request'  (duration: 240.146113ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-14T14:42:31.076106Z","caller":"traceutil/trace.go:171","msg":"trace[1663891298] transaction","detail":"{read_only:false; response_revision:2663; number_of_response:1; }","duration":"120.948703ms","start":"2024-10-14T14:42:30.955138Z","end":"2024-10-14T14:42:31.076086Z","steps":["trace[1663891298] 'process raft request'  (duration: 120.791304ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-14T14:42:31.287943Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"149.945532ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-132600-m03\" ","response":"range_response_count:1 size:3114"}
	{"level":"info","ts":"2024-10-14T14:42:31.288066Z","caller":"traceutil/trace.go:171","msg":"trace[1149000350] range","detail":"{range_begin:/registry/minions/ha-132600-m03; range_end:; response_count:1; response_revision:2663; }","duration":"150.051932ms","start":"2024-10-14T14:42:31.137978Z","end":"2024-10-14T14:42:31.288030Z","steps":["trace[1149000350] 'range keys from in-memory index tree'  (duration: 149.753933ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-14T14:42:32.313366Z","caller":"traceutil/trace.go:171","msg":"trace[1148264888] linearizableReadLoop","detail":"{readStateIndex:2939; appliedIndex:2938; }","duration":"175.689169ms","start":"2024-10-14T14:42:32.137533Z","end":"2024-10-14T14:42:32.313222Z","steps":["trace[1148264888] 'read index received'  (duration: 175.27797ms)","trace[1148264888] 'applied index is now lower than readState.Index'  (duration: 410.299µs)"],"step_count":2}
	{"level":"info","ts":"2024-10-14T14:42:32.313766Z","caller":"traceutil/trace.go:171","msg":"trace[53442055] transaction","detail":"{read_only:false; response_revision:2667; number_of_response:1; }","duration":"341.166363ms","start":"2024-10-14T14:42:31.972588Z","end":"2024-10-14T14:42:32.313754Z","steps":["trace[53442055] 'process raft request'  (duration: 340.380665ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-14T14:42:32.314054Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-14T14:42:31.972576Z","time spent":"341.263763ms","remote":"127.0.0.1:55456","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1094,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:2661 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1021 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-10-14T14:42:32.314975Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"177.510264ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-132600-m03\" ","response":"range_response_count:1 size:3114"}
	{"level":"info","ts":"2024-10-14T14:42:32.315174Z","caller":"traceutil/trace.go:171","msg":"trace[597120152] range","detail":"{range_begin:/registry/minions/ha-132600-m03; range_end:; response_count:1; response_revision:2667; }","duration":"177.711264ms","start":"2024-10-14T14:42:32.137452Z","end":"2024-10-14T14:42:32.315163Z","steps":["trace[597120152] 'agreement among raft nodes before linearized reading'  (duration: 177.444264ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-14T14:46:56.596123Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2559}
	{"level":"info","ts":"2024-10-14T14:46:56.611577Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":2559,"took":"14.540461ms","hash":3178179299,"current-db-size-bytes":2416640,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2015232,"current-db-size-in-use":"2.0 MB"}
	{"level":"info","ts":"2024-10-14T14:46:56.611712Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3178179299,"revision":2559,"compact-revision":2024}
	
	
	==> kernel <==
	 14:50:19 up 30 min,  0 users,  load average: 0.17, 0.43, 0.38
	Linux ha-132600 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [dae2c0aa67af] <==
	I1014 14:49:17.563932       1 main.go:323] Node ha-132600-m03 has CIDR [10.244.1.0/24] 
	I1014 14:49:27.572780       1 main.go:296] Handling node with IPs: map[172.20.108.120:{}]
	I1014 14:49:27.572912       1 main.go:300] handling current node
	I1014 14:49:27.573003       1 main.go:296] Handling node with IPs: map[172.20.111.174:{}]
	I1014 14:49:27.573396       1 main.go:323] Node ha-132600-m03 has CIDR [10.244.1.0/24] 
	I1014 14:49:37.565129       1 main.go:296] Handling node with IPs: map[172.20.108.120:{}]
	I1014 14:49:37.565277       1 main.go:300] handling current node
	I1014 14:49:37.565299       1 main.go:296] Handling node with IPs: map[172.20.111.174:{}]
	I1014 14:49:37.565356       1 main.go:323] Node ha-132600-m03 has CIDR [10.244.1.0/24] 
	I1014 14:49:47.564739       1 main.go:296] Handling node with IPs: map[172.20.108.120:{}]
	I1014 14:49:47.565022       1 main.go:300] handling current node
	I1014 14:49:47.565060       1 main.go:296] Handling node with IPs: map[172.20.111.174:{}]
	I1014 14:49:47.565070       1 main.go:323] Node ha-132600-m03 has CIDR [10.244.1.0/24] 
	I1014 14:49:57.572228       1 main.go:296] Handling node with IPs: map[172.20.108.120:{}]
	I1014 14:49:57.572264       1 main.go:300] handling current node
	I1014 14:49:57.572282       1 main.go:296] Handling node with IPs: map[172.20.111.174:{}]
	I1014 14:49:57.572289       1 main.go:323] Node ha-132600-m03 has CIDR [10.244.1.0/24] 
	I1014 14:50:07.571182       1 main.go:296] Handling node with IPs: map[172.20.108.120:{}]
	I1014 14:50:07.571297       1 main.go:300] handling current node
	I1014 14:50:07.571327       1 main.go:296] Handling node with IPs: map[172.20.111.174:{}]
	I1014 14:50:07.571335       1 main.go:323] Node ha-132600-m03 has CIDR [10.244.1.0/24] 
	I1014 14:50:17.563535       1 main.go:296] Handling node with IPs: map[172.20.108.120:{}]
	I1014 14:50:17.563585       1 main.go:300] handling current node
	I1014 14:50:17.563604       1 main.go:296] Handling node with IPs: map[172.20.111.174:{}]
	I1014 14:50:17.563610       1 main.go:323] Node ha-132600-m03 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [b661cb671310] <==
	I1014 14:21:58.925388       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1014 14:21:58.925426       1 policy_source.go:224] refreshing policies
	I1014 14:21:58.928296       1 controller.go:615] quota admission added evaluator for: namespaces
	E1014 14:21:58.950810       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1014 14:21:59.174958       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1014 14:21:59.819127       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1014 14:21:59.829797       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1014 14:21:59.829835       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1014 14:22:00.942976       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1014 14:22:01.015766       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1014 14:22:01.150423       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1014 14:22:01.164901       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.20.108.120]
	I1014 14:22:01.165818       1 controller.go:615] quota admission added evaluator for: endpoints
	I1014 14:22:01.175247       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1014 14:22:01.835583       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1014 14:22:03.554010       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1014 14:22:03.589407       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1014 14:22:03.617368       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1014 14:22:07.346752       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1014 14:22:07.516653       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E1014 14:38:03.250866       1 conn.go:339] Error on socket receive: read tcp 172.20.111.254:8443->172.20.96.1:57382: use of closed network connection
	E1014 14:38:04.469699       1 conn.go:339] Error on socket receive: read tcp 172.20.111.254:8443->172.20.96.1:57390: use of closed network connection
	E1014 14:38:05.596658       1 conn.go:339] Error on socket receive: read tcp 172.20.111.254:8443->172.20.96.1:57398: use of closed network connection
	E1014 14:38:40.228042       1 conn.go:339] Error on socket receive: read tcp 172.20.111.254:8443->172.20.96.1:57419: use of closed network connection
	E1014 14:38:50.673483       1 conn.go:339] Error on socket receive: read tcp 172.20.111.254:8443->172.20.96.1:57421: use of closed network connection
	
	
	==> kube-controller-manager [4a8cce31aa3a] <==
	I1014 14:42:14.099308       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-132600-m03" podCIDRs=["10.244.1.0/24"]
	I1014 14:42:14.099376       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600-m03"
	E1014 14:42:14.131350       1 range_allocator.go:427] "Failed to update node PodCIDR after multiple attempts" err="failed to patch node CIDR: Node \"ha-132600-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.2.0/24\", \"10.244.1.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="ha-132600-m03" podCIDRs=["10.244.2.0/24"]
	E1014 14:42:14.131490       1 range_allocator.go:433] "CIDR assignment for node failed. Releasing allocated CIDR" err="failed to patch node CIDR: Node \"ha-132600-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.2.0/24\", \"10.244.1.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="ha-132600-m03"
	E1014 14:42:14.131543       1 range_allocator.go:246] "Unhandled Error" err="error syncing 'ha-132600-m03': failed to patch node CIDR: Node \"ha-132600-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.2.0/24\", \"10.244.1.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid], requeuing" logger="UnhandledError"
	I1014 14:42:14.132334       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600-m03"
	I1014 14:42:14.137556       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600-m03"
	I1014 14:42:14.252734       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600-m03"
	I1014 14:42:14.858650       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600-m03"
	I1014 14:42:17.081634       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-132600-m03"
	I1014 14:42:17.154621       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600-m03"
	I1014 14:42:24.267607       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600-m03"
	I1014 14:42:44.660569       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600-m03"
	I1014 14:42:46.187605       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-132600-m03"
	I1014 14:42:46.189657       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600-m03"
	I1014 14:42:46.206592       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600-m03"
	I1014 14:42:46.221387       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="51µs"
	I1014 14:42:46.232286       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="51.3µs"
	I1014 14:42:46.252360       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="196µs"
	I1014 14:42:47.108442       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600-m03"
	I1014 14:42:49.071073       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="17.613156ms"
	I1014 14:42:49.071582       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="49.8µs"
	I1014 14:43:14.925968       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600-m03"
	I1014 14:47:05.189799       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600"
	I1014 14:48:21.846100       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-132600-m03"
	
	
	==> kube-proxy [4745a4b0dc37] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1014 14:22:09.024513       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1014 14:22:09.047174       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["172.20.108.120"]
	E1014 14:22:09.047308       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1014 14:22:09.124346       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1014 14:22:09.124494       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1014 14:22:09.124529       1 server_linux.go:169] "Using iptables Proxier"
	I1014 14:22:09.128809       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1014 14:22:09.129721       1 server.go:483] "Version info" version="v1.31.1"
	I1014 14:22:09.129805       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 14:22:09.131652       1 config.go:199] "Starting service config controller"
	I1014 14:22:09.131988       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1014 14:22:09.132269       1 config.go:105] "Starting endpoint slice config controller"
	I1014 14:22:09.132619       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1014 14:22:09.133592       1 config.go:328] "Starting node config controller"
	I1014 14:22:09.135184       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1014 14:22:09.232621       1 shared_informer.go:320] Caches are synced for service config
	I1014 14:22:09.233933       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1014 14:22:09.236054       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [35c870864a80] <==
	W1014 14:21:59.944424       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1014 14:21:59.944452       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 14:21:59.979078       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1014 14:21:59.979365       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 14:22:00.111250       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1014 14:22:00.111664       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 14:22:00.192015       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1014 14:22:00.192346       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1014 14:22:00.196984       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1014 14:22:00.197112       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 14:22:00.197208       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1014 14:22:00.197241       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1014 14:22:00.212172       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1014 14:22:00.212214       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1014 14:22:00.260144       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1014 14:22:00.261002       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1014 14:22:00.276235       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1014 14:22:00.276419       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1014 14:22:00.295031       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1014 14:22:00.295892       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 14:22:00.311664       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1014 14:22:00.313212       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1014 14:22:00.354351       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1014 14:22:00.355204       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 14:22:02.117335       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 14 14:46:03 ha-132600 kubelet[2324]: E1014 14:46:03.687832    2324 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 14 14:46:03 ha-132600 kubelet[2324]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 14 14:46:03 ha-132600 kubelet[2324]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 14 14:46:03 ha-132600 kubelet[2324]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 14 14:46:03 ha-132600 kubelet[2324]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 14 14:47:03 ha-132600 kubelet[2324]: E1014 14:47:03.679016    2324 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 14 14:47:03 ha-132600 kubelet[2324]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 14 14:47:03 ha-132600 kubelet[2324]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 14 14:47:03 ha-132600 kubelet[2324]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 14 14:47:03 ha-132600 kubelet[2324]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 14 14:48:03 ha-132600 kubelet[2324]: E1014 14:48:03.676932    2324 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 14 14:48:03 ha-132600 kubelet[2324]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 14 14:48:03 ha-132600 kubelet[2324]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 14 14:48:03 ha-132600 kubelet[2324]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 14 14:48:03 ha-132600 kubelet[2324]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 14 14:49:03 ha-132600 kubelet[2324]: E1014 14:49:03.676825    2324 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 14 14:49:03 ha-132600 kubelet[2324]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 14 14:49:03 ha-132600 kubelet[2324]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 14 14:49:03 ha-132600 kubelet[2324]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 14 14:49:03 ha-132600 kubelet[2324]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 14 14:50:03 ha-132600 kubelet[2324]: E1014 14:50:03.677986    2324 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 14 14:50:03 ha-132600 kubelet[2324]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 14 14:50:03 ha-132600 kubelet[2324]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 14 14:50:03 ha-132600 kubelet[2324]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 14 14:50:03 ha-132600 kubelet[2324]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
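
The etcd section of the log above is dotted with "apply request took too long" warnings (expected-duration 100ms, observed roughly 140-240ms). On a Hyper-V runner this pattern usually points to slow virtual-disk I/O rather than an etcd fault, so it is worth quantifying before blaming the test itself. Below is a minimal Go sketch that tallies those warnings from a saved dump; the file name minikube-logs.txt is an assumption, and the parsing relies on the JSON fields ("msg", "took") visible in the lines above.

// etcdslow.go: summarize etcd "apply request took too long" warnings from a
// saved `minikube logs` dump. Diagnostic sketch only: the input file name is
// an assumption; the "took" format ("211.106689ms") matches the lines above.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strings"
	"time"
)

func main() {
	f, err := os.Open("minikube-logs.txt") // assumed dump of `minikube logs`
	if err != nil {
		panic(err)
	}
	defer f.Close()

	var count int
	var max, total time.Duration
	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // etcd trace lines are long
	for sc.Scan() {
		line := sc.Text()
		start := strings.IndexByte(line, '{')
		if start < 0 || !strings.Contains(line, "apply request took too long") {
			continue
		}
		var rec struct {
			Took string `json:"took"`
		}
		if json.Unmarshal([]byte(line[start:]), &rec) != nil {
			continue
		}
		d, err := time.ParseDuration(rec.Took)
		if err != nil {
			continue
		}
		count++
		total += d
		if d > max {
			max = d
		}
	}
	if count > 0 {
		fmt.Printf("%d slow applies, max %v, mean %v\n", count, max, total/time.Duration(count))
	}
}

Against the excerpt above this would report a handful of slow applies in the 140-240ms range: slow enough to explain jittery control-plane behaviour during the restart, but not an etcd failure by itself.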
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-132600 -n ha-132600
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-132600 -n ha-132600: (11.7844319s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-132600 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-7dff88458-rng7p
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-132600 describe pod busybox-7dff88458-rng7p
helpers_test.go:282: (dbg) kubectl --context ha-132600 describe pod busybox-7dff88458-rng7p:

-- stdout --
	Name:             busybox-7dff88458-rng7p
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7dff88458
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7dff88458
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9hpj2 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-9hpj2:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                    From               Message
	  ----     ------            ----                   ----               -------
	  Warning  FailedScheduling  8m59s (x4 over 24m)    default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  2m29s (x3 over 7m46s)  default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.

-- /stdout --
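
The describe output explains the non-running pod: busybox-7dff88458-rng7p is blocked by a required pod anti-affinity rule. Each of the two Ready nodes already hosts one busybox replica (kr92j on ha-132600, 8thz6 on ha-132600-m03, per the node listings above), so the third replica has nowhere to go until the restarted node rejoins, which matches "0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules". A Go sketch of the kind of anti-affinity term that produces exactly this message follows; the app=busybox selector and hostname topology key are assumptions, since the actual manifest (minikube testdata) is not reproduced in this report.

// antiaffinity.go: sketch of a required pod anti-affinity term that leaves a
// replica Pending once every node hosts a matching pod. Requires k8s.io/api
// and k8s.io/apimachinery in go.mod; labels and topology key are assumptions.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	aff := corev1.Affinity{
		PodAntiAffinity: &corev1.PodAntiAffinity{
			// "Required" (not "Preferred") is what keeps the pod Pending: the
			// scheduler may not co-locate two matching pods, and it has no
			// victim to evict, hence "No preemption victims found".
			RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
				LabelSelector: &metav1.LabelSelector{
					MatchLabels: map[string]string{"app": "busybox"}, // assumed label
				},
				TopologyKey: "kubernetes.io/hostname", // at most one per node
			}},
		},
	}
	b, _ := json.MarshalIndent(aff, "", "  ")
	fmt.Println(string(b))
}

Under that reading, one Pending replica is consistent with a three-replica, one-per-node spread on a temporarily two-node cluster rather than a scheduler fault.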
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (100.05s)

TestMultiNode/serial/PingHostFrom2Pods (55.6s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-671000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-671000 -- exec busybox-7dff88458-bnqj6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-671000 -- exec busybox-7dff88458-bnqj6 -- sh -c "ping -c 1 172.20.96.1"
E1014 08:27:00.466973     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\functional-572000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:583: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-671000 -- exec busybox-7dff88458-bnqj6 -- sh -c "ping -c 1 172.20.96.1": exit status 1 (10.425639s)

-- stdout --
	PING 172.20.96.1 (172.20.96.1): 56 data bytes
	
	--- 172.20.96.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
multinode_test.go:584: Failed to ping host (172.20.96.1) from pod (busybox-7dff88458-bnqj6): exit status 1
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-671000 -- exec busybox-7dff88458-vlp7j -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-671000 -- exec busybox-7dff88458-vlp7j -- sh -c "ping -c 1 172.20.96.1"
multinode_test.go:583: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-671000 -- exec busybox-7dff88458-vlp7j -- sh -c "ping -c 1 172.20.96.1": exit status 1 (10.4249769s)

-- stdout --
	PING 172.20.96.1 (172.20.96.1): 56 data bytes
	
	--- 172.20.96.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
multinode_test.go:584: Failed to ping host (172.20.96.1) from pod (busybox-7dff88458-vlp7j): exit status 1
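
Both pods get through the nslookup step (multinode_test.go:572) but lose every ICMP echo to the Hyper-V host gateway 172.20.96.1. On Windows runners that combination usually means the host firewall is dropping inbound ICMP on the virtual switch, not that the route is missing; allowing ICMPv4 echo on the host typically clears it. A hedged way to separate the two cases from the guest side is to probe the host over TCP, where even "connection refused" proves reachability. The sketch below does that; the port list is an assumption.

// probe.go: distinguish "ICMP filtered by the host firewall" from "no route
// to the Hyper-V gateway". Run from the guest side of the vSwitch; the
// target IP matches the report, the probed ports are assumptions.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	host := "172.20.96.1"
	for _, port := range []string{"445", "135", "8443"} {
		addr := net.JoinHostPort(host, port)
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err != nil {
			// "connection refused" still proves the host answered; only a
			// timeout on every port points at a routing problem.
			fmt.Printf("%s: %v\n", addr, err)
			continue
		}
		conn.Close()
		fmt.Printf("%s: open\n", addr)
	}
}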
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-671000 -n multinode-671000
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-671000 -n multinode-671000: (11.6238533s)
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-671000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-671000 logs -n 25: (8.3081606s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| ssh     | mount-start-2-953800 ssh -- ls                    | mount-start-2-953800 | minikube1\jenkins | v1.34.0 | 14 Oct 24 08:15 PDT | 14 Oct 24 08:15 PDT |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| delete  | -p mount-start-1-953800                           | mount-start-1-953800 | minikube1\jenkins | v1.34.0 | 14 Oct 24 08:15 PDT | 14 Oct 24 08:16 PDT |
	|         | --alsologtostderr -v=5                            |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-953800 ssh -- ls                    | mount-start-2-953800 | minikube1\jenkins | v1.34.0 | 14 Oct 24 08:16 PDT | 14 Oct 24 08:16 PDT |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| stop    | -p mount-start-2-953800                           | mount-start-2-953800 | minikube1\jenkins | v1.34.0 | 14 Oct 24 08:16 PDT | 14 Oct 24 08:17 PDT |
	| start   | -p mount-start-2-953800                           | mount-start-2-953800 | minikube1\jenkins | v1.34.0 | 14 Oct 24 08:17 PDT | 14 Oct 24 08:18 PDT |
	| mount   | C:\Users\jenkins.minikube1:/minikube-host         | mount-start-2-953800 | minikube1\jenkins | v1.34.0 | 14 Oct 24 08:18 PDT |                     |
	|         | --profile mount-start-2-953800 --v 0              |                      |                   |         |                     |                     |
	|         | --9p-version 9p2000.L --gid 0 --ip                |                      |                   |         |                     |                     |
	|         | --msize 6543 --port 46465 --type 9p --uid         |                      |                   |         |                     |                     |
	|         |                                                 0 |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-953800 ssh -- ls                    | mount-start-2-953800 | minikube1\jenkins | v1.34.0 | 14 Oct 24 08:19 PDT | 14 Oct 24 08:19 PDT |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| delete  | -p mount-start-2-953800                           | mount-start-2-953800 | minikube1\jenkins | v1.34.0 | 14 Oct 24 08:19 PDT | 14 Oct 24 08:19 PDT |
	| delete  | -p mount-start-1-953800                           | mount-start-1-953800 | minikube1\jenkins | v1.34.0 | 14 Oct 24 08:19 PDT | 14 Oct 24 08:19 PDT |
	| start   | -p multinode-671000                               | multinode-671000     | minikube1\jenkins | v1.34.0 | 14 Oct 24 08:19 PDT | 14 Oct 24 08:26 PDT |
	|         | --wait=true --memory=2200                         |                      |                   |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |                   |         |                     |                     |
	|         | --alsologtostderr                                 |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                   |                      |                   |         |                     |                     |
	| kubectl | -p multinode-671000 -- apply -f                   | multinode-671000     | minikube1\jenkins | v1.34.0 | 14 Oct 24 08:26 PDT | 14 Oct 24 08:26 PDT |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |                   |         |                     |                     |
	| kubectl | -p multinode-671000 -- rollout                    | multinode-671000     | minikube1\jenkins | v1.34.0 | 14 Oct 24 08:26 PDT | 14 Oct 24 08:26 PDT |
	|         | status deployment/busybox                         |                      |                   |         |                     |                     |
	| kubectl | -p multinode-671000 -- get pods -o                | multinode-671000     | minikube1\jenkins | v1.34.0 | 14 Oct 24 08:26 PDT | 14 Oct 24 08:26 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-671000 -- get pods -o                | multinode-671000     | minikube1\jenkins | v1.34.0 | 14 Oct 24 08:26 PDT | 14 Oct 24 08:26 PDT |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-671000 -- exec                       | multinode-671000     | minikube1\jenkins | v1.34.0 | 14 Oct 24 08:26 PDT | 14 Oct 24 08:26 PDT |
	|         | busybox-7dff88458-bnqj6 --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-671000 -- exec                       | multinode-671000     | minikube1\jenkins | v1.34.0 | 14 Oct 24 08:26 PDT | 14 Oct 24 08:26 PDT |
	|         | busybox-7dff88458-vlp7j --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-671000 -- exec                       | multinode-671000     | minikube1\jenkins | v1.34.0 | 14 Oct 24 08:26 PDT | 14 Oct 24 08:26 PDT |
	|         | busybox-7dff88458-bnqj6 --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-671000 -- exec                       | multinode-671000     | minikube1\jenkins | v1.34.0 | 14 Oct 24 08:26 PDT | 14 Oct 24 08:26 PDT |
	|         | busybox-7dff88458-vlp7j --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-671000 -- exec                       | multinode-671000     | minikube1\jenkins | v1.34.0 | 14 Oct 24 08:26 PDT | 14 Oct 24 08:26 PDT |
	|         | busybox-7dff88458-bnqj6 -- nslookup               |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-671000 -- exec                       | multinode-671000     | minikube1\jenkins | v1.34.0 | 14 Oct 24 08:26 PDT | 14 Oct 24 08:26 PDT |
	|         | busybox-7dff88458-vlp7j -- nslookup               |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-671000 -- get pods -o                | multinode-671000     | minikube1\jenkins | v1.34.0 | 14 Oct 24 08:26 PDT | 14 Oct 24 08:26 PDT |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-671000 -- exec                       | multinode-671000     | minikube1\jenkins | v1.34.0 | 14 Oct 24 08:26 PDT | 14 Oct 24 08:26 PDT |
	|         | busybox-7dff88458-bnqj6                           |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-671000 -- exec                       | multinode-671000     | minikube1\jenkins | v1.34.0 | 14 Oct 24 08:26 PDT |                     |
	|         | busybox-7dff88458-bnqj6 -- sh                     |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.20.96.1                          |                      |                   |         |                     |                     |
	| kubectl | -p multinode-671000 -- exec                       | multinode-671000     | minikube1\jenkins | v1.34.0 | 14 Oct 24 08:27 PDT | 14 Oct 24 08:27 PDT |
	|         | busybox-7dff88458-vlp7j                           |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-671000 -- exec                       | multinode-671000     | minikube1\jenkins | v1.34.0 | 14 Oct 24 08:27 PDT |                     |
	|         | busybox-7dff88458-vlp7j -- sh                     |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.20.96.1                          |                      |                   |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
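
The two "ping -c 1 172.20.96.1" rows above are the only commands in the table without an end time, i.e. the pod-to-host ICMP checks never completed successfully. A hedged repro sketch (pod name and host-side address copied verbatim from this run; substitute your own):

    # Re-run the failing pod-to-host ping via the kubectl pass-through.
    $pod    = 'busybox-7dff88458-bnqj6'
    $hostIp = '172.20.96.1'   # Hyper-V host-side address in this run
    & out/minikube-windows-amd64.exe -p multinode-671000 kubectl -- `
        exec $pod -- ping -c 1 $hostIp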
	
	
	==> Last Start <==
	Log file created at: 2024/10/14 08:19:39
	Running on machine: minikube1
	Binary: Built with gc go1.23.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
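
The four header lines above describe the klog format used by every entry that follows. A small sketch (log file name is hypothetical) that filters warnings and errors out of such a file:

    # Parse the [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg header
    # and keep only W/E/F entries.
    $pattern = '^(?<lvl>[IWEF])(?<mmdd>\d{4}) (?<time>[\d:.]+)\s+(?<tid>\d+) (?<loc>[^\]]+)\] (?<msg>.*)$'
    Get-Content .\last_start.log | Where-Object {
        $_ -match $pattern -and 'W','E','F' -contains $Matches.lvl
    } | ForEach-Object { '{0} {1} {2}' -f $Matches.lvl, $Matches.time, $Matches.msg }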
	I1014 08:19:39.764804    3988 out.go:345] Setting OutFile to fd 1096 ...
	I1014 08:19:39.766873    3988 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 08:19:39.766873    3988 out.go:358] Setting ErrFile to fd 1264...
	I1014 08:19:39.766873    3988 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 08:19:39.794019    3988 out.go:352] Setting JSON to false
	I1014 08:19:39.797748    3988 start.go:129] hostinfo: {"hostname":"minikube1","uptime":104694,"bootTime":1728814485,"procs":205,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5011 Build 19045.5011","kernelVersion":"10.0.19045.5011 Build 19045.5011","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W1014 08:19:39.797748    3988 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1014 08:19:39.806152    3988 out.go:177] * [multinode-671000] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5011 Build 19045.5011
	I1014 08:19:39.810178    3988 notify.go:220] Checking for updates...
	I1014 08:19:39.811803    3988 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1014 08:19:39.816853    3988 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 08:19:39.819481    3988 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I1014 08:19:39.821483    3988 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 08:19:39.824486    3988 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 08:19:39.827489    3988 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 08:19:45.076811    3988 out.go:177] * Using the hyperv driver based on user configuration
	I1014 08:19:45.080620    3988 start.go:297] selected driver: hyperv
	I1014 08:19:45.080620    3988 start.go:901] validating driver "hyperv" against <nil>
	I1014 08:19:45.080620    3988 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 08:19:45.128266    3988 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1014 08:19:45.129761    3988 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 08:19:45.129761    3988 cni.go:84] Creating CNI manager for ""
	I1014 08:19:45.129761    3988 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1014 08:19:45.129761    3988 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1014 08:19:45.129761    3988 start.go:340] cluster config:
	{Name:multinode-671000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-671000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 08:19:45.130557    3988 iso.go:125] acquiring lock: {Name:mkb71635bcbccba29dce9048ad4cc71430a7e577 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 08:19:45.135484    3988 out.go:177] * Starting "multinode-671000" primary control-plane node in "multinode-671000" cluster
	I1014 08:19:45.138080    3988 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1014 08:19:45.138080    3988 preload.go:146] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I1014 08:19:45.138080    3988 cache.go:56] Caching tarball of preloaded images
	I1014 08:19:45.138080    3988 preload.go:172] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1014 08:19:45.138901    3988 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1014 08:19:45.139539    3988 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000\config.json ...
	I1014 08:19:45.139671    3988 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000\config.json: {Name:mk72021f863ca8f35036f3d4c32b5a0b47bc2781 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 08:19:45.140995    3988 start.go:360] acquireMachinesLock for multinode-671000: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 08:19:45.141156    3988 start.go:364] duration metric: took 93.6µs to acquireMachinesLock for "multinode-671000"
	I1014 08:19:45.141156    3988 start.go:93] Provisioning new machine with config: &{Name:multinode-671000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-671000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1014 08:19:45.141156    3988 start.go:125] createHost starting for "" (driver="hyperv")
	I1014 08:19:45.144909    3988 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1014 08:19:45.145188    3988 start.go:159] libmachine.API.Create for "multinode-671000" (driver="hyperv")
	I1014 08:19:45.145188    3988 client.go:168] LocalClient.Create starting
	I1014 08:19:45.145385    3988 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I1014 08:19:45.145946    3988 main.go:141] libmachine: Decoding PEM data...
	I1014 08:19:45.145946    3988 main.go:141] libmachine: Parsing certificate...
	I1014 08:19:45.145946    3988 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I1014 08:19:45.145946    3988 main.go:141] libmachine: Decoding PEM data...
	I1014 08:19:45.145946    3988 main.go:141] libmachine: Parsing certificate...
	I1014 08:19:45.146513    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I1014 08:19:47.253545    3988 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I1014 08:19:47.253545    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:19:47.253753    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I1014 08:19:48.957498    3988 main.go:141] libmachine: [stdout =====>] : False
	
	I1014 08:19:48.958149    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:19:48.958149    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1014 08:19:50.448874    3988 main.go:141] libmachine: [stdout =====>] : True
	
	I1014 08:19:50.449082    3988 main.go:141] libmachine: [stderr =====>] : 
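
The SID probed first, S-1-5-32-578, is the built-in Hyper-V Administrators group; because the Jenkins user is not a member (stdout False), the driver falls back to the full Administrator check, which passes. To inspect the group yourself:

    # List members of the group whose SID the driver checked above.
    Get-LocalGroupMember -Group 'Hyper-V Administrators' |
        Select-Object Name, ObjectClass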
	I1014 08:19:50.449171    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1014 08:19:53.952655    3988 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1014 08:19:53.952655    3988 main.go:141] libmachine: [stderr =====>] : 
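
The switch query above looks for an External switch and otherwise accepts the well-known Default Switch GUID; SwitchType 1 in the JSON is Internal (2 would be External). The same filter, runnable in an elevated prompt:

    Hyper-V\Get-VMSwitch |
        Select-Object Id, Name, SwitchType |
        Where-Object { $_.SwitchType -eq 'External' -or
                       $_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444' } |
        Sort-Object SwitchType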
	I1014 08:19:53.955084    3988 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1014 08:19:54.477542    3988 main.go:141] libmachine: Creating SSH key...
	I1014 08:19:54.544652    3988 main.go:141] libmachine: Creating VM...
	I1014 08:19:54.544652    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1014 08:19:57.311673    3988 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1014 08:19:57.311736    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:19:57.311736    3988 main.go:141] libmachine: Using switch "Default Switch"
	I1014 08:19:57.311736    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1014 08:19:59.046313    3988 main.go:141] libmachine: [stdout =====>] : True
	
	I1014 08:19:59.046556    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:19:59.046556    3988 main.go:141] libmachine: Creating VHD
	I1014 08:19:59.046625    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-671000\fixed.vhd' -SizeBytes 10MB -Fixed
	I1014 08:20:02.684663    3988 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-671000\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 977A48CA-83BB-4612-8D21-68196CE0492E
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I1014 08:20:02.685503    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:20:02.685503    3988 main.go:141] libmachine: Writing magic tar header
	I1014 08:20:02.685607    3988 main.go:141] libmachine: Writing SSH key tar header
	I1014 08:20:02.700158    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-671000\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-671000\disk.vhd' -VHDType Dynamic -DeleteSource
	I1014 08:20:05.820805    3988 main.go:141] libmachine: [stdout =====>] : 
	I1014 08:20:05.820805    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:20:05.821002    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-671000\disk.vhd' -SizeBytes 20000MB
	I1014 08:20:08.442470    3988 main.go:141] libmachine: [stdout =====>] : 
	I1014 08:20:08.442470    3988 main.go:141] libmachine: [stderr =====>] : 
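
The disk is built in three steps: a tiny fixed VHD is created so the driver can write a tar stream containing the SSH key straight into it ("Writing magic tar header" above), then the file is converted to a dynamic VHD and grown to the requested size. A condensed reconstruction of the logged commands:

    $dir = 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-671000'
    Hyper-V\New-VHD     -Path "$dir\fixed.vhd" -SizeBytes 10MB -Fixed
    # (driver appends the SSH-key tar stream to fixed.vhd here)
    Hyper-V\Convert-VHD -Path "$dir\fixed.vhd" -DestinationPath "$dir\disk.vhd" -VHDType Dynamic -DeleteSource
    Hyper-V\Resize-VHD  -Path "$dir\disk.vhd" -SizeBytes 20000MB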
	I1014 08:20:08.442470    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-671000 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-671000' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I1014 08:20:12.009376    3988 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-671000 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I1014 08:20:12.009625    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:20:12.009721    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-671000 -DynamicMemoryEnabled $false
	I1014 08:20:14.188725    3988 main.go:141] libmachine: [stdout =====>] : 
	I1014 08:20:14.189552    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:20:14.189620    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-671000 -Count 2
	I1014 08:20:16.262248    3988 main.go:141] libmachine: [stdout =====>] : 
	I1014 08:20:16.262248    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:20:16.263065    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-671000 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-671000\boot2docker.iso'
	I1014 08:20:18.763816    3988 main.go:141] libmachine: [stdout =====>] : 
	I1014 08:20:18.763816    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:20:18.764005    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-671000 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-671000\disk.vhd'
	I1014 08:20:21.317380    3988 main.go:141] libmachine: [stdout =====>] : 
	I1014 08:20:21.317638    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:20:21.317638    3988 main.go:141] libmachine: Starting VM...
	I1014 08:20:21.317717    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-671000
	I1014 08:20:24.472636    3988 main.go:141] libmachine: [stdout =====>] : 
	I1014 08:20:24.472636    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:20:24.472636    3988 main.go:141] libmachine: Waiting for host to start...
	I1014 08:20:24.473696    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000 ).state
	I1014 08:20:26.671531    3988 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:20:26.671561    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:20:26.671561    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000 ).networkadapters[0]).ipaddresses[0]
	I1014 08:20:29.127047    3988 main.go:141] libmachine: [stdout =====>] : 
	I1014 08:20:29.127376    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:20:30.128156    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000 ).state
	I1014 08:20:32.283259    3988 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:20:32.283259    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:20:32.283890    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000 ).networkadapters[0]).ipaddresses[0]
	I1014 08:20:34.764481    3988 main.go:141] libmachine: [stdout =====>] : 
	I1014 08:20:34.764633    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:20:35.765466    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000 ).state
	I1014 08:20:37.934379    3988 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:20:37.934813    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:20:37.935158    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000 ).networkadapters[0]).ipaddresses[0]
	I1014 08:20:40.362444    3988 main.go:141] libmachine: [stdout =====>] : 
	I1014 08:20:40.363397    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:20:41.364223    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000 ).state
	I1014 08:20:43.488696    3988 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:20:43.488964    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:20:43.489109    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000 ).networkadapters[0]).ipaddresses[0]
	I1014 08:20:45.900689    3988 main.go:141] libmachine: [stdout =====>] : 
	I1014 08:20:45.901318    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:20:46.902254    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000 ).state
	I1014 08:20:49.037080    3988 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:20:49.038100    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:20:49.038125    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000 ).networkadapters[0]).ipaddresses[0]
	I1014 08:20:51.519903    3988 main.go:141] libmachine: [stdout =====>] : 172.20.100.167
	
	I1014 08:20:51.520701    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:20:51.520701    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000 ).state
	I1014 08:20:53.582442    3988 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:20:53.582442    3988 main.go:141] libmachine: [stderr =====>] : 
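
Between Start-VM at 08:20:24 and the first non-empty address at 08:20:51 the driver simply polls the VM state and its first NIC address until the Default Switch's DHCP hands out a lease; an empty stdout above means no address yet. A minimal sketch of that loop:

    $vm = 'multinode-671000'
    do {
        Start-Sleep -Seconds 1
        $state = (Hyper-V\Get-VM $vm).State
        $ip    = ((Hyper-V\Get-VM $vm).NetworkAdapters[0]).IPAddresses[0]
    } until ($state -eq 'Running' -and $ip)
    "$vm is $state at $ip"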
	I1014 08:20:53.582442    3988 machine.go:93] provisionDockerMachine start ...
	I1014 08:20:53.582442    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000 ).state
	I1014 08:20:55.652122    3988 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:20:55.652122    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:20:55.652801    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000 ).networkadapters[0]).ipaddresses[0]
	I1014 08:20:58.091418    3988 main.go:141] libmachine: [stdout =====>] : 172.20.100.167
	
	I1014 08:20:58.091418    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:20:58.098978    3988 main.go:141] libmachine: Using SSH client type: native
	I1014 08:20:58.111935    3988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.100.167 22 <nil> <nil>}
	I1014 08:20:58.111935    3988 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 08:20:58.232943    3988 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1014 08:20:58.233114    3988 buildroot.go:166] provisioning hostname "multinode-671000"
	I1014 08:20:58.233217    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000 ).state
	I1014 08:21:00.314969    3988 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:21:00.315160    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:21:00.315215    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000 ).networkadapters[0]).ipaddresses[0]
	I1014 08:21:02.787808    3988 main.go:141] libmachine: [stdout =====>] : 172.20.100.167
	
	I1014 08:21:02.788821    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:21:02.794463    3988 main.go:141] libmachine: Using SSH client type: native
	I1014 08:21:02.795125    3988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.100.167 22 <nil> <nil>}
	I1014 08:21:02.795125    3988 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-671000 && echo "multinode-671000" | sudo tee /etc/hostname
	I1014 08:21:02.950006    3988 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-671000
	
	I1014 08:21:02.950006    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000 ).state
	I1014 08:21:05.000302    3988 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:21:05.000302    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:21:05.000302    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000 ).networkadapters[0]).ipaddresses[0]
	I1014 08:21:07.457398    3988 main.go:141] libmachine: [stdout =====>] : 172.20.100.167
	
	I1014 08:21:07.457398    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:21:07.463930    3988 main.go:141] libmachine: Using SSH client type: native
	I1014 08:21:07.464626    3988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.100.167 22 <nil> <nil>}
	I1014 08:21:07.464626    3988 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-671000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-671000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-671000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 08:21:07.601805    3988 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 08:21:07.601805    3988 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I1014 08:21:07.601805    3988 buildroot.go:174] setting up certificates
	I1014 08:21:07.601805    3988 provision.go:84] configureAuth start
	I1014 08:21:07.601805    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000 ).state
	I1014 08:21:09.681990    3988 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:21:09.681990    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:21:09.682314    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000 ).networkadapters[0]).ipaddresses[0]
	I1014 08:21:12.148324    3988 main.go:141] libmachine: [stdout =====>] : 172.20.100.167
	
	I1014 08:21:12.148324    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:21:12.149064    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000 ).state
	I1014 08:21:14.239140    3988 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:21:14.239140    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:21:14.239570    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000 ).networkadapters[0]).ipaddresses[0]
	I1014 08:21:16.705711    3988 main.go:141] libmachine: [stdout =====>] : 172.20.100.167
	
	I1014 08:21:16.706589    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:21:16.706656    3988 provision.go:143] copyHostCerts
	I1014 08:21:16.706845    3988 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I1014 08:21:16.706845    3988 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I1014 08:21:16.706845    3988 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I1014 08:21:16.707651    3988 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1014 08:21:16.708889    3988 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I1014 08:21:16.709028    3988 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I1014 08:21:16.709028    3988 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I1014 08:21:16.709028    3988 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1014 08:21:16.710584    3988 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I1014 08:21:16.710893    3988 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I1014 08:21:16.710893    3988 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I1014 08:21:16.711392    3988 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I1014 08:21:16.712356    3988 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-671000 san=[127.0.0.1 172.20.100.167 localhost minikube multinode-671000]
	I1014 08:21:17.276704    3988 provision.go:177] copyRemoteCerts
	I1014 08:21:17.286052    3988 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 08:21:17.287039    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000 ).state
	I1014 08:21:19.372904    3988 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:21:19.372904    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:21:19.373652    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000 ).networkadapters[0]).ipaddresses[0]
	I1014 08:21:21.837166    3988 main.go:141] libmachine: [stdout =====>] : 172.20.100.167
	
	I1014 08:21:21.837235    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:21:21.838071    3988 sshutil.go:53] new ssh client: &{IP:172.20.100.167 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-671000\id_rsa Username:docker}
	I1014 08:21:21.944223    3988 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.6581637s)
	I1014 08:21:21.944223    3988 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1014 08:21:21.944223    3988 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1014 08:21:21.995823    3988 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1014 08:21:21.995928    3988 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I1014 08:21:22.046800    3988 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1014 08:21:22.047379    3988 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 08:21:22.093873    3988 provision.go:87] duration metric: took 14.4920458s to configureAuth
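
configureAuth generated a server certificate whose SANs (listed at 08:21:16 above) cover the VM address 172.20.100.167, then pushed ca.pem, server.pem, and server-key.pem into /etc/docker. To double-check the SANs from the host, certutil understands PEM directly:

    certutil -dump 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem'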
	I1014 08:21:22.093990    3988 buildroot.go:189] setting minikube options for container-runtime
	I1014 08:21:22.094615    3988 config.go:182] Loaded profile config "multinode-671000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 08:21:22.094766    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000 ).state
	I1014 08:21:24.172147    3988 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:21:24.172147    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:21:24.172147    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000 ).networkadapters[0]).ipaddresses[0]
	I1014 08:21:26.642191    3988 main.go:141] libmachine: [stdout =====>] : 172.20.100.167
	
	I1014 08:21:26.642818    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:21:26.648003    3988 main.go:141] libmachine: Using SSH client type: native
	I1014 08:21:26.648003    3988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.100.167 22 <nil> <nil>}
	I1014 08:21:26.648533    3988 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1014 08:21:26.770485    3988 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1014 08:21:26.770485    3988 buildroot.go:70] root file system type: tmpfs
	I1014 08:21:26.770689    3988 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1014 08:21:26.770856    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000 ).state
	I1014 08:21:28.821024    3988 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:21:28.821120    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:21:28.821199    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000 ).networkadapters[0]).ipaddresses[0]
	I1014 08:21:31.283009    3988 main.go:141] libmachine: [stdout =====>] : 172.20.100.167
	
	I1014 08:21:31.283641    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:21:31.289196    3988 main.go:141] libmachine: Using SSH client type: native
	I1014 08:21:31.289577    3988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.100.167 22 <nil> <nil>}
	I1014 08:21:31.289577    3988 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1014 08:21:31.444515    3988 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1014 08:21:31.445797    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000 ).state
	I1014 08:21:33.510983    3988 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:21:33.511986    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:21:33.511986    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000 ).networkadapters[0]).ipaddresses[0]
	I1014 08:21:35.944741    3988 main.go:141] libmachine: [stdout =====>] : 172.20.100.167
	
	I1014 08:21:35.944971    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:21:35.953581    3988 main.go:141] libmachine: Using SSH client type: native
	I1014 08:21:35.954186    3988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.100.167 22 <nil> <nil>}
	I1014 08:21:35.954186    3988 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1014 08:21:38.113186    3988 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1014 08:21:38.113271    3988 machine.go:96] duration metric: took 44.5307625s to provisionDockerMachine
	I1014 08:21:38.113331    3988 client.go:171] duration metric: took 1m52.9679136s to LocalClient.Create
	I1014 08:21:38.113331    3988 start.go:167] duration metric: took 1m52.9679736s to libmachine.API.Create "multinode-671000"
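
The diff-or-install one-liner above only replaces docker.service when the rendered unit differs; on this first boot, diff failing with "No such file or directory" is expected and is exactly what triggers the enable-and-restart branch. Once docker is up, the TLS endpoint from the ExecStart line in the unit above (tcp://0.0.0.0:2376) can be probed from the host:

    Test-NetConnection -ComputerName 172.20.100.167 -Port 2376 |
        Select-Object ComputerName, RemotePort, TcpTestSucceeded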
	I1014 08:21:38.113419    3988 start.go:293] postStartSetup for "multinode-671000" (driver="hyperv")
	I1014 08:21:38.113419    3988 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 08:21:38.124937    3988 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 08:21:38.124937    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000 ).state
	I1014 08:21:40.181988    3988 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:21:40.182319    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:21:40.182319    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000 ).networkadapters[0]).ipaddresses[0]
	I1014 08:21:42.607166    3988 main.go:141] libmachine: [stdout =====>] : 172.20.100.167
	
	I1014 08:21:42.607476    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:21:42.607645    3988 sshutil.go:53] new ssh client: &{IP:172.20.100.167 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-671000\id_rsa Username:docker}
	I1014 08:21:42.714205    3988 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.5892604s)
	I1014 08:21:42.724736    3988 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 08:21:42.732594    3988 command_runner.go:130] > NAME=Buildroot
	I1014 08:21:42.732594    3988 command_runner.go:130] > VERSION=2023.02.9-dirty
	I1014 08:21:42.732594    3988 command_runner.go:130] > ID=buildroot
	I1014 08:21:42.732594    3988 command_runner.go:130] > VERSION_ID=2023.02.9
	I1014 08:21:42.732594    3988 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I1014 08:21:42.732594    3988 info.go:137] Remote host: Buildroot 2023.02.9
	I1014 08:21:42.732594    3988 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I1014 08:21:42.733296    3988 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I1014 08:21:42.734488    3988 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem -> 9362.pem in /etc/ssl/certs
	I1014 08:21:42.734563    3988 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem -> /etc/ssl/certs/9362.pem
	I1014 08:21:42.744958    3988 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 08:21:42.762550    3988 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem --> /etc/ssl/certs/9362.pem (1708 bytes)
	I1014 08:21:42.807805    3988 start.go:296] duration metric: took 4.6943785s for postStartSetup
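
The filesync scan above found 9362.pem under the host-side files tree and copied it to /etc/ssl/certs in the guest: anything placed under .minikube\files\<path> on the host is mirrored to /<path> in the VM at start. For example (certificate name hypothetical):

    $files = 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\files'
    New-Item -ItemType Directory -Force -Path "$files\etc\ssl\certs" | Out-Null
    Copy-Item .\corp-ca.pem "$files\etc\ssl\certs\corp-ca.pem"
    # next start: appears in the guest as /etc/ssl/certs/corp-ca.pem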
	I1014 08:21:42.811108    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000 ).state
	I1014 08:21:44.855599    3988 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:21:44.855661    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:21:44.855661    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000 ).networkadapters[0]).ipaddresses[0]
	I1014 08:21:47.288431    3988 main.go:141] libmachine: [stdout =====>] : 172.20.100.167
	
	I1014 08:21:47.288666    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:21:47.288666    3988 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000\config.json ...
	I1014 08:21:47.291764    3988 start.go:128] duration metric: took 2m2.1503575s to createHost
	I1014 08:21:47.291764    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000 ).state
	I1014 08:21:49.308159    3988 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:21:49.308159    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:21:49.308739    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000 ).networkadapters[0]).ipaddresses[0]
	I1014 08:21:51.742169    3988 main.go:141] libmachine: [stdout =====>] : 172.20.100.167
	
	I1014 08:21:51.742169    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:21:51.748254    3988 main.go:141] libmachine: Using SSH client type: native
	I1014 08:21:51.748921    3988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.100.167 22 <nil> <nil>}
	I1014 08:21:51.748921    3988 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1014 08:21:51.876439    3988 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728919311.875473213
	
	I1014 08:21:51.876439    3988 fix.go:216] guest clock: 1728919311.875473213
	I1014 08:21:51.876439    3988 fix.go:229] Guest: 2024-10-14 08:21:51.875473213 -0700 PDT Remote: 2024-10-14 08:21:47.2917648 -0700 PDT m=+127.632750001 (delta=4.583708413s)
	I1014 08:21:51.876439    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000 ).state
	I1014 08:21:53.930566    3988 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:21:53.930566    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:21:53.931482    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000 ).networkadapters[0]).ipaddresses[0]
	I1014 08:21:56.448807    3988 main.go:141] libmachine: [stdout =====>] : 172.20.100.167
	
	I1014 08:21:56.449439    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:21:56.455314    3988 main.go:141] libmachine: Using SSH client type: native
	I1014 08:21:56.455869    3988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.100.167 22 <nil> <nil>}
	I1014 08:21:56.455970    3988 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1728919311
	I1014 08:21:56.599359    3988 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Oct 14 15:21:51 UTC 2024
	
	I1014 08:21:56.599359    3988 fix.go:236] clock set: Mon Oct 14 15:21:51 UTC 2024
	 (err=<nil>)
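
The guest clock was 4.58s ahead of the host's view of the create time, so the driver read the guest epoch with "date +%s.%N" and reset it with "sudo date -s @...". A sketch of the same skew computation on the host side (guest epoch copied from the log; the reset itself goes over SSH as shown above):

    $guestEpoch = 1728919311.875473213          # output of date +%s.%N
    $hostEpoch  = [DateTimeOffset]::Now.ToUnixTimeSeconds()
    $delta = [math]::Abs($guestEpoch - $hostEpoch)
    if ($delta -gt 1) { "clock skew {0:N3}s - guest clock would be reset" -f $delta }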
	I1014 08:21:56.599359    3988 start.go:83] releasing machines lock for "multinode-671000", held for 2m11.4580056s
	I1014 08:21:56.599359    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000 ).state
	I1014 08:21:58.685414    3988 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:21:58.685414    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:21:58.685549    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000 ).networkadapters[0]).ipaddresses[0]
	I1014 08:22:01.203370    3988 main.go:141] libmachine: [stdout =====>] : 172.20.100.167
	
	I1014 08:22:01.203907    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:22:01.207886    3988 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1014 08:22:01.207972    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000 ).state
	I1014 08:22:01.216855    3988 ssh_runner.go:195] Run: cat /version.json
	I1014 08:22:01.216855    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000 ).state
	I1014 08:22:03.370563    3988 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:22:03.370563    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:22:03.370563    3988 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:22:03.370563    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:22:03.370563    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000 ).networkadapters[0]).ipaddresses[0]
	I1014 08:22:03.370781    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000 ).networkadapters[0]).ipaddresses[0]
	I1014 08:22:05.930103    3988 main.go:141] libmachine: [stdout =====>] : 172.20.100.167
	
	I1014 08:22:05.930103    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:22:05.930284    3988 sshutil.go:53] new ssh client: &{IP:172.20.100.167 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-671000\id_rsa Username:docker}
	I1014 08:22:05.955549    3988 main.go:141] libmachine: [stdout =====>] : 172.20.100.167
	
	I1014 08:22:05.955549    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:22:05.956154    3988 sshutil.go:53] new ssh client: &{IP:172.20.100.167 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-671000\id_rsa Username:docker}
	I1014 08:22:06.015477    3988 command_runner.go:130] > {"iso_version": "v1.34.0-1728382514-19774", "kicbase_version": "v0.0.45-1728063813-19756", "minikube_version": "v1.34.0", "commit": "cf9f11c2b0369efc07a929c4a1fdb2b4b3c62ee9"}
	I1014 08:22:06.016639    3988 ssh_runner.go:235] Completed: cat /version.json: (4.7996807s)
	I1014 08:22:06.028413    3988 ssh_runner.go:195] Run: systemctl --version
	I1014 08:22:06.032628    3988 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I1014 08:22:06.033495    3988 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.8256013s)
	W1014 08:22:06.033495    3988 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
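	Note: the exit-127 failure above is the root cause of the "Failing to connect to https://registry.k8s.io/" warning emitted later in this log: the reachability probe ships the Windows binary name (curl.exe) into the Linux guest over SSH, where only curl exists on PATH. A hedged Go sketch of picking the binary by where the command will run rather than by host OS (helper name hypothetical, not minikube's actual code):

package main

import (
	"fmt"
	"runtime"
)

// curlBinary returns the curl executable name appropriate for where the
// probe will run. The stderr above shows what happens when the host-OS
// name leaks into the guest: "bash: line 1: curl.exe: command not found".
func curlBinary(runsOnWindowsHost bool) string {
	if runsOnWindowsHost && runtime.GOOS == "windows" {
		return "curl.exe"
	}
	return "curl" // inside the Linux guest, plain curl is on PATH
}

func main() {
	fmt.Println("host probe: ", curlBinary(true))
	fmt.Println("guest probe:", curlBinary(false))
}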
	I1014 08:22:06.042385    3988 command_runner.go:130] > systemd 252 (252)
	I1014 08:22:06.042385    3988 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I1014 08:22:06.052888    3988 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1014 08:22:06.061738    3988 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1014 08:22:06.062598    3988 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 08:22:06.073481    3988 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 08:22:06.103438    3988 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1014 08:22:06.103537    3988 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
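	Note: the find invocation above sidelines any pre-installed bridge/podman CNI configs by renaming them with a .mk_disabled suffix, so minikube's own CNI choice wins; here it catches /etc/cni/net.d/87-podman-bridge.conflist. A rough Go equivalent of the same rename pass (paths as in the log, logic assumed):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	var disabled []string
	for _, pat := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
		matches, _ := filepath.Glob(pat)
		for _, f := range matches {
			if strings.HasSuffix(f, ".mk_disabled") {
				continue // already sidelined on a previous run
			}
			if err := os.Rename(f, f+".mk_disabled"); err == nil {
				disabled = append(disabled, f)
			}
		}
	}
	fmt.Println("disabled bridge cni config(s):", disabled)
}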
	I1014 08:22:06.103537    3988 start.go:495] detecting cgroup driver to use...
	I1014 08:22:06.103979    3988 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 08:22:06.142329    3988 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1014 08:22:06.153693    3988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	W1014 08:22:06.158627    3988 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W1014 08:22:06.158627    3988 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1014 08:22:06.186929    3988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1014 08:22:06.205737    3988 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1014 08:22:06.216078    3988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1014 08:22:06.246366    3988 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1014 08:22:06.279825    3988 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1014 08:22:06.309477    3988 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1014 08:22:06.341144    3988 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 08:22:06.374955    3988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1014 08:22:06.407380    3988 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1014 08:22:06.437362    3988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
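	Note: the run of sed -i edits above rewrites /etc/containerd/config.toml in place: pin the pause sandbox image, force SystemdCgroup = false (the cgroupfs driver), migrate runtime.v1/runc.v1 entries to io.containerd.runc.v2, point conf_dir at /etc/cni/net.d, and re-insert enable_unprivileged_ports = true under the CRI plugin. A small Go sketch of one of those rewrites using the same regex idea (input is an inline sample, not the real file):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	in := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
`
	// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	fmt.Print(re.ReplaceAllString(in, "${1}SystemdCgroup = false"))
}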
	I1014 08:22:06.466638    3988 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 08:22:06.485384    3988 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1014 08:22:06.486092    3988 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1014 08:22:06.497340    3988 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1014 08:22:06.530891    3988 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
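	Note: the sysctl probe above exits with status 255 because br_netfilter is not loaded yet, so minikube falls back to modprobe and then enables IPv4 forwarding, both prerequisites for bridge-based pod networking. A sketch of that check-then-load fallback via os/exec (passwordless sudo assumed, as in the guest):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Probe first; "cannot stat /proc/sys/net/bridge/..." just means the
	// module isn't loaded, which is fine as long as modprobe succeeds.
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			fmt.Println("modprobe br_netfilter:", err)
		}
	}
	if err := exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run(); err != nil {
		fmt.Println("enable ip_forward:", err)
	}
}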
	I1014 08:22:06.558650    3988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 08:22:06.756469    3988 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1014 08:22:06.786678    3988 start.go:495] detecting cgroup driver to use...
	I1014 08:22:06.799496    3988 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1014 08:22:06.822245    3988 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1014 08:22:06.822997    3988 command_runner.go:130] > [Unit]
	I1014 08:22:06.822997    3988 command_runner.go:130] > Description=Docker Application Container Engine
	I1014 08:22:06.822997    3988 command_runner.go:130] > Documentation=https://docs.docker.com
	I1014 08:22:06.823060    3988 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I1014 08:22:06.823060    3988 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I1014 08:22:06.823060    3988 command_runner.go:130] > StartLimitBurst=3
	I1014 08:22:06.823098    3988 command_runner.go:130] > StartLimitIntervalSec=60
	I1014 08:22:06.823098    3988 command_runner.go:130] > [Service]
	I1014 08:22:06.823098    3988 command_runner.go:130] > Type=notify
	I1014 08:22:06.823098    3988 command_runner.go:130] > Restart=on-failure
	I1014 08:22:06.823098    3988 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1014 08:22:06.823098    3988 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1014 08:22:06.823098    3988 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1014 08:22:06.823098    3988 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1014 08:22:06.823098    3988 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1014 08:22:06.823098    3988 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1014 08:22:06.823098    3988 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1014 08:22:06.823098    3988 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1014 08:22:06.823098    3988 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1014 08:22:06.823098    3988 command_runner.go:130] > ExecStart=
	I1014 08:22:06.823098    3988 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I1014 08:22:06.823098    3988 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1014 08:22:06.823098    3988 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1014 08:22:06.823098    3988 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1014 08:22:06.823098    3988 command_runner.go:130] > LimitNOFILE=infinity
	I1014 08:22:06.823098    3988 command_runner.go:130] > LimitNPROC=infinity
	I1014 08:22:06.823098    3988 command_runner.go:130] > LimitCORE=infinity
	I1014 08:22:06.823098    3988 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1014 08:22:06.823098    3988 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1014 08:22:06.823098    3988 command_runner.go:130] > TasksMax=infinity
	I1014 08:22:06.823098    3988 command_runner.go:130] > TimeoutStartSec=0
	I1014 08:22:06.823098    3988 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1014 08:22:06.823098    3988 command_runner.go:130] > Delegate=yes
	I1014 08:22:06.823098    3988 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1014 08:22:06.823098    3988 command_runner.go:130] > KillMode=process
	I1014 08:22:06.823098    3988 command_runner.go:130] > [Install]
	I1014 08:22:06.823098    3988 command_runner.go:130] > WantedBy=multi-user.target
	I1014 08:22:06.835064    3988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 08:22:06.870385    3988 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 08:22:06.905914    3988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 08:22:06.938401    3988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1014 08:22:06.972121    3988 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1014 08:22:07.034229    3988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1014 08:22:07.056927    3988 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 08:22:07.090810    3988 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1014 08:22:07.102567    3988 ssh_runner.go:195] Run: which cri-dockerd
	I1014 08:22:07.109440    3988 command_runner.go:130] > /usr/bin/cri-dockerd
	I1014 08:22:07.119571    3988 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1014 08:22:07.138941    3988 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1014 08:22:07.181779    3988 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1014 08:22:07.395687    3988 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1014 08:22:07.576269    3988 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1014 08:22:07.576476    3988 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1014 08:22:07.621091    3988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 08:22:07.826161    3988 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1014 08:22:10.394490    3988 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5683252s)
	I1014 08:22:10.405575    3988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1014 08:22:10.446904    3988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1014 08:22:10.482057    3988 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1014 08:22:10.670662    3988 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1014 08:22:10.870297    3988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 08:22:11.068229    3988 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1014 08:22:11.115356    3988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1014 08:22:11.149187    3988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 08:22:11.348011    3988 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
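	Note: cri-dockerd is socket-activated, so the sequence above is ordered deliberately: unmask and enable cri-docker.socket, daemon-reload, restart the socket, then restart the service, after which (next lines) minikube waits up to 60s for /var/run/cri-dockerd.sock to appear. A compressed Go sketch of the same systemctl sequence:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	steps := [][]string{
		{"systemctl", "unmask", "cri-docker.socket"},
		{"systemctl", "enable", "cri-docker.socket"},
		{"systemctl", "daemon-reload"},
		{"systemctl", "restart", "cri-docker.socket"},
		{"systemctl", "restart", "cri-docker.service"},
	}
	for _, s := range steps {
		if err := exec.Command("sudo", s...).Run(); err != nil {
			fmt.Printf("sudo %v: %v\n", s, err)
			return
		}
	}
}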
	I1014 08:22:11.453214    3988 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1014 08:22:11.465885    3988 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1014 08:22:11.476230    3988 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1014 08:22:11.476377    3988 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1014 08:22:11.476377    3988 command_runner.go:130] > Device: 0,22	Inode: 879         Links: 1
	I1014 08:22:11.476440    3988 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I1014 08:22:11.476440    3988 command_runner.go:130] > Access: 2024-10-14 15:22:11.373204702 +0000
	I1014 08:22:11.476440    3988 command_runner.go:130] > Modify: 2024-10-14 15:22:11.373204702 +0000
	I1014 08:22:11.476440    3988 command_runner.go:130] > Change: 2024-10-14 15:22:11.376204705 +0000
	I1014 08:22:11.476509    3988 command_runner.go:130] >  Birth: -
	I1014 08:22:11.476509    3988 start.go:563] Will wait 60s for crictl version
	I1014 08:22:11.487041    3988 ssh_runner.go:195] Run: which crictl
	I1014 08:22:11.492607    3988 command_runner.go:130] > /usr/bin/crictl
	I1014 08:22:11.503866    3988 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1014 08:22:11.554187    3988 command_runner.go:130] > Version:  0.1.0
	I1014 08:22:11.554187    3988 command_runner.go:130] > RuntimeName:  docker
	I1014 08:22:11.554187    3988 command_runner.go:130] > RuntimeVersion:  27.3.1
	I1014 08:22:11.554579    3988 command_runner.go:130] > RuntimeApiVersion:  v1
	I1014 08:22:11.555129    3988 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I1014 08:22:11.564556    3988 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1014 08:22:11.598261    3988 command_runner.go:130] > 27.3.1
	I1014 08:22:11.608278    3988 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1014 08:22:11.636719    3988 command_runner.go:130] > 27.3.1
	I1014 08:22:11.640577    3988 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	I1014 08:22:11.640750    3988 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I1014 08:22:11.646365    3988 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I1014 08:22:11.646365    3988 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I1014 08:22:11.646365    3988 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I1014 08:22:11.646365    3988 ip.go:211] Found interface: {Index:13 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:08:65:4d Flags:up|broadcast|multicast|running}
	I1014 08:22:11.648997    3988 ip.go:214] interface addr: fe80::b548:ff79:d464:75b8/64
	I1014 08:22:11.648997    3988 ip.go:214] interface addr: 172.20.96.1/20
	I1014 08:22:11.659024    3988 ssh_runner.go:195] Run: grep 172.20.96.1	host.minikube.internal$ /etc/hosts
	I1014 08:22:11.664929    3988 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.20.96.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
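	Note: the bash one-liner above makes the host.minikube.internal entry idempotent: it filters out any existing line for that name, appends a fresh "172.20.96.1<TAB>host.minikube.internal", and copies the result back over /etc/hosts. A rough Go equivalent of the same rewrite (error handling elided for brevity):

package main

import (
	"os"
	"strings"
)

func main() {
	const entry = "172.20.96.1\thost.minikube.internal"
	data, _ := os.ReadFile("/etc/hosts")
	var keep []string
	for _, ln := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(ln, "\thost.minikube.internal") {
			keep = append(keep, ln) // drop any stale entry for this name
		}
	}
	keep = append(keep, entry)
	_ = os.WriteFile("/etc/hosts", []byte(strings.Join(keep, "\n")+"\n"), 0644)
}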
	I1014 08:22:11.685393    3988 kubeadm.go:883] updating cluster {Name:multinode-671000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.31.1 ClusterName:multinode-671000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.100.167 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 08:22:11.685937    3988 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1014 08:22:11.695027    3988 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1014 08:22:11.718389    3988 docker.go:689] Got preloaded images: 
	I1014 08:22:11.718474    3988 docker.go:695] registry.k8s.io/kube-apiserver:v1.31.1 wasn't preloaded
	I1014 08:22:11.729373    3988 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1014 08:22:11.747150    3988 command_runner.go:139] > {"Repositories":{}}
	I1014 08:22:11.758684    3988 ssh_runner.go:195] Run: which lz4
	I1014 08:22:11.764800    3988 command_runner.go:130] > /usr/bin/lz4
	I1014 08:22:11.764883    3988 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1014 08:22:11.778463    3988 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1014 08:22:11.784621    3988 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1014 08:22:11.785227    3988 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1014 08:22:11.785227    3988 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (342028912 bytes)
	I1014 08:22:13.560724    3988 docker.go:653] duration metric: took 1.7958385s to copy over tarball
	I1014 08:22:13.573252    3988 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1014 08:22:22.115595    3988 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.5421333s)
	I1014 08:22:22.115713    3988 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1014 08:22:22.175966    3988 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1014 08:22:22.194436    3988 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.3":"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e":"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.15-0":"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a":"sha256:2e96e5913fc06e3d26915af3d0f
2ca5048cc4b6327e661e80da792cbf8d8d9d4"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.31.1":"sha256:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb":"sha256:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.31.1":"sha256:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1":"sha256:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.31.1":"sha256:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44":"sha256:60c005f310ff3ad6d131805170f07d2946095307063eaaa5e
edcaf06a0a89561"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.31.1":"sha256:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0":"sha256:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.10":"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a":"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136"}}}
	I1014 08:22:22.194651    3988 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I1014 08:22:22.240529    3988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 08:22:22.439765    3988 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1014 08:22:25.613830    3988 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.1740608s)
	I1014 08:22:25.625290    3988 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1014 08:22:25.651129    3988 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.31.1
	I1014 08:22:25.651166    3988 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.31.1
	I1014 08:22:25.651166    3988 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.31.1
	I1014 08:22:25.651166    3988 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.31.1
	I1014 08:22:25.651166    3988 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.3
	I1014 08:22:25.651166    3988 command_runner.go:130] > registry.k8s.io/etcd:3.5.15-0
	I1014 08:22:25.651166    3988 command_runner.go:130] > registry.k8s.io/pause:3.10
	I1014 08:22:25.651166    3988 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 08:22:25.651253    3988 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1014 08:22:25.651253    3988 cache_images.go:84] Images are preloaded, skipping loading
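	Note: the stretch from "Got preloaded images:" to here shows the preload path end to end: no images exist after first boot, so minikube scp's the ~342 MB lz4 tarball into the guest, extracts it under /var with tar -I lz4 (keeping security.capability xattrs so binaries retain their file capabilities), writes the matching repositories.json tag-to-digest index, restarts Docker, and re-lists images to confirm all eight are present. A small Go check in that spirit (expected list taken from the log above):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	expected := []string{
		"registry.k8s.io/kube-apiserver:v1.31.1",
		"registry.k8s.io/kube-scheduler:v1.31.1",
		"registry.k8s.io/kube-controller-manager:v1.31.1",
		"registry.k8s.io/kube-proxy:v1.31.1",
		"registry.k8s.io/coredns/coredns:v1.11.3",
		"registry.k8s.io/etcd:3.5.15-0",
		"registry.k8s.io/pause:3.10",
		"gcr.io/k8s-minikube/storage-provisioner:v5",
	}
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		fmt.Println("docker images:", err)
		return
	}
	have := make(map[string]bool)
	for _, ln := range strings.Fields(string(out)) {
		have[ln] = true
	}
	for _, img := range expected {
		if !have[img] {
			fmt.Println("missing preloaded image:", img)
		}
	}
}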
	I1014 08:22:25.651253    3988 kubeadm.go:934] updating node { 172.20.100.167 8443 v1.31.1 docker true true} ...
	I1014 08:22:25.651536    3988 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-671000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.20.100.167
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-671000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 08:22:25.661084    3988 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1014 08:22:25.723790    3988 command_runner.go:130] > cgroupfs
	I1014 08:22:25.723935    3988 cni.go:84] Creating CNI manager for ""
	I1014 08:22:25.723935    3988 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1014 08:22:25.724005    3988 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1014 08:22:25.724083    3988 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.20.100.167 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-671000 NodeName:multinode-671000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.20.100.167"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.20.100.167 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 08:22:25.724422    3988 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.20.100.167
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-671000"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "172.20.100.167"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.20.100.167"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
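	Note: a few deliberate choices in the generated kubeadm config above are easy to miss: evictionHard thresholds of 0% plus imageGCHighThresholdPercent: 100 effectively disable kubelet disk eviction and image GC (sensible for a small CI VM), failSwapOn: false tolerates swap, the 0s conntrack timeouts skip sysctls the guest may not expose, and cgroupDriver: cgroupfs matches the Docker/containerd configuration applied earlier. minikube renders this file from a template; a toy Go sketch of that templating step (field names hypothetical):

package main

import (
	"os"
	"text/template"
)

const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.Port}}
`

func main() {
	t := template.Must(template.New("init").Parse(initTmpl))
	_ = t.Execute(os.Stdout, struct {
		NodeIP string
		Port   int
	}{NodeIP: "172.20.100.167", Port: 8443})
}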
	
	I1014 08:22:25.735690    3988 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1014 08:22:25.752411    3988 command_runner.go:130] > kubeadm
	I1014 08:22:25.752411    3988 command_runner.go:130] > kubectl
	I1014 08:22:25.753417    3988 command_runner.go:130] > kubelet
	I1014 08:22:25.753417    3988 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 08:22:25.764030    3988 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 08:22:25.781809    3988 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1014 08:22:25.815356    3988 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 08:22:25.844807    3988 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2300 bytes)
	I1014 08:22:25.890607    3988 ssh_runner.go:195] Run: grep 172.20.100.167	control-plane.minikube.internal$ /etc/hosts
	I1014 08:22:25.895770    3988 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.20.100.167	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 08:22:25.931779    3988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 08:22:26.122859    3988 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 08:22:26.152209    3988 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000 for IP: 172.20.100.167
	I1014 08:22:26.152209    3988 certs.go:194] generating shared ca certs ...
	I1014 08:22:26.152209    3988 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 08:22:26.153278    3988 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I1014 08:22:26.153689    3988 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I1014 08:22:26.153965    3988 certs.go:256] generating profile certs ...
	I1014 08:22:26.154783    3988 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000\client.key
	I1014 08:22:26.154991    3988 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000\client.crt with IP's: []
	I1014 08:22:26.291347    3988 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000\client.crt ...
	I1014 08:22:26.291347    3988 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000\client.crt: {Name:mkd7d48d88b020da921a66ef6341cce1c8bb725a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 08:22:26.293031    3988 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000\client.key ...
	I1014 08:22:26.293031    3988 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000\client.key: {Name:mk72ff1c9ae3ea3c824cdd7ac1bd9f3cc07b2166 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 08:22:26.293884    3988 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000\apiserver.key.8a486e0a
	I1014 08:22:26.294939    3988 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000\apiserver.crt.8a486e0a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.20.100.167]
	I1014 08:22:26.534975    3988 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000\apiserver.crt.8a486e0a ...
	I1014 08:22:26.534975    3988 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000\apiserver.crt.8a486e0a: {Name:mk92ad6e27d1270a5469268a834a1024d79e9ec8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 08:22:26.536640    3988 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000\apiserver.key.8a486e0a ...
	I1014 08:22:26.536640    3988 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000\apiserver.key.8a486e0a: {Name:mk1860e6405cdbf62eb2247077a2d684e52e934e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 08:22:26.537039    3988 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000\apiserver.crt.8a486e0a -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000\apiserver.crt
	I1014 08:22:26.552386    3988 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000\apiserver.key.8a486e0a -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000\apiserver.key
	I1014 08:22:26.553472    3988 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000\proxy-client.key
	I1014 08:22:26.553472    3988 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000\proxy-client.crt with IP's: []
	I1014 08:22:26.648055    3988 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000\proxy-client.crt ...
	I1014 08:22:26.648055    3988 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000\proxy-client.crt: {Name:mk7b2ef6e133897a159c4d1c44b1d794f0ee3f80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 08:22:26.650186    3988 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000\proxy-client.key ...
	I1014 08:22:26.650186    3988 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000\proxy-client.key: {Name:mk244509f88939a1c1dd7e78807343a07d516875 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 08:22:26.650551    3988 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I1014 08:22:26.651589    3988 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I1014 08:22:26.651589    3988 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1014 08:22:26.651589    3988 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1014 08:22:26.652321    3988 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1014 08:22:26.652321    3988 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1014 08:22:26.652697    3988 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1014 08:22:26.663532    3988 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1014 08:22:26.664439    3988 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\936.pem (1338 bytes)
	W1014 08:22:26.664848    3988 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\936_empty.pem, impossibly tiny 0 bytes
	I1014 08:22:26.665062    3988 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1014 08:22:26.665062    3988 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I1014 08:22:26.665478    3988 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1014 08:22:26.665478    3988 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I1014 08:22:26.665478    3988 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem (1708 bytes)
	I1014 08:22:26.666435    3988 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem -> /usr/share/ca-certificates/9362.pem
	I1014 08:22:26.666435    3988 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1014 08:22:26.666435    3988 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\936.pem -> /usr/share/ca-certificates/936.pem
	I1014 08:22:26.668425    3988 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 08:22:26.717851    3988 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1014 08:22:26.760933    3988 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 08:22:26.808390    3988 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 08:22:26.851442    3988 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1014 08:22:26.893308    3988 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1014 08:22:26.935867    3988 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 08:22:26.980666    3988 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1014 08:22:27.029454    3988 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem --> /usr/share/ca-certificates/9362.pem (1708 bytes)
	I1014 08:22:27.078069    3988 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 08:22:27.122513    3988 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\936.pem --> /usr/share/ca-certificates/936.pem (1338 bytes)
	I1014 08:22:27.163594    3988 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
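	Note: the apiserver certificate generated above is signed for IPs [10.96.0.1 127.0.0.1 10.0.0.1 172.20.100.167]: the VM's address, loopback, and 10.96.0.1, the first address of the 10.96.0.0/12 service CIDR, which is the in-cluster kubernetes.default ClusterIP. Deriving that first service IP from the CIDR in Go:

package main

import (
	"fmt"
	"net"
)

func main() {
	_, cidr, err := net.ParseCIDR("10.96.0.0/12")
	if err != nil {
		panic(err)
	}
	ip := cidr.IP.To4() // network address 10.96.0.0
	ip[3]++             // first usable address: 10.96.0.1, the apiserver's in-cluster VIP
	fmt.Println(ip)     // 10.96.0.1
}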
	I1014 08:22:27.201209    3988 ssh_runner.go:195] Run: openssl version
	I1014 08:22:27.210038    3988 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I1014 08:22:27.222749    3988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9362.pem && ln -fs /usr/share/ca-certificates/9362.pem /etc/ssl/certs/9362.pem"
	I1014 08:22:27.253177    3988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9362.pem
	I1014 08:22:27.258612    3988 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 14 14:00 /usr/share/ca-certificates/9362.pem
	I1014 08:22:27.258612    3988 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 14:00 /usr/share/ca-certificates/9362.pem
	I1014 08:22:27.272106    3988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9362.pem
	I1014 08:22:27.279666    3988 command_runner.go:130] > 3ec20f2e
	I1014 08:22:27.290444    3988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9362.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 08:22:27.321778    3988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 08:22:27.351603    3988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 08:22:27.358399    3988 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 14 13:44 /usr/share/ca-certificates/minikubeCA.pem
	I1014 08:22:27.358399    3988 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 13:44 /usr/share/ca-certificates/minikubeCA.pem
	I1014 08:22:27.367380    3988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 08:22:27.376232    3988 command_runner.go:130] > b5213941
	I1014 08:22:27.387197    3988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 08:22:27.416107    3988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/936.pem && ln -fs /usr/share/ca-certificates/936.pem /etc/ssl/certs/936.pem"
	I1014 08:22:27.445224    3988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/936.pem
	I1014 08:22:27.451730    3988 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 14 14:00 /usr/share/ca-certificates/936.pem
	I1014 08:22:27.451730    3988 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 14:00 /usr/share/ca-certificates/936.pem
	I1014 08:22:27.462964    3988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/936.pem
	I1014 08:22:27.470674    3988 command_runner.go:130] > 51391683
	I1014 08:22:27.481730    3988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/936.pem /etc/ssl/certs/51391683.0"
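	Note: the openssl x509 -hash calls above compute each CA's subject-name hash (3ec20f2e, b5213941, 51391683) so a <hash>.0 symlink can be dropped into /etc/ssl/certs/; that hashed-link layout is how OpenSSL locates trust anchors by subject at verify time. A sketch of the same install step driven from Go:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func installCA(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // replace any stale link, mirroring ln -fs
	return os.Symlink(pem, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}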
	I1014 08:22:27.511704    3988 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 08:22:27.517854    3988 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1014 08:22:27.518293    3988 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1014 08:22:27.518829    3988 kubeadm.go:392] StartCluster: {Name:multinode-671000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.3
1.1 ClusterName:multinode-671000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.100.167 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:
[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 08:22:27.527727    3988 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1014 08:22:27.565073    3988 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 08:22:27.581356    3988 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I1014 08:22:27.581356    3988 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I1014 08:22:27.581356    3988 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I1014 08:22:27.593359    3988 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 08:22:27.622956    3988 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 08:22:27.640318    3988 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1014 08:22:27.640373    3988 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1014 08:22:27.640373    3988 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1014 08:22:27.640425    3988 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 08:22:27.640500    3988 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 08:22:27.640564    3988 kubeadm.go:157] found existing configuration files:
	
	I1014 08:22:27.653271    3988 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 08:22:27.669909    3988 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 08:22:27.669909    3988 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 08:22:27.681664    3988 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 08:22:27.711826    3988 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 08:22:27.728847    3988 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 08:22:27.729728    3988 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 08:22:27.742510    3988 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 08:22:27.773890    3988 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 08:22:27.789394    3988 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 08:22:27.789394    3988 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 08:22:27.800770    3988 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 08:22:27.830157    3988 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 08:22:27.844075    3988 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 08:22:27.844248    3988 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 08:22:27.856411    3988 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 08:22:27.872001    3988 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1014 08:22:28.291098    3988 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 08:22:28.291098    3988 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
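	Note: the kubeadm init invocation above prefixes PATH with the version-pinned binaries directory and passes an explicit --ignore-preflight-errors list: the DirAvailable/FileAvailable checks (minikube pre-populates those paths itself), Port-10250, Swap, NumCPU, and Mem. Assembling that flag in Go, as a sketch:

package main

import (
	"fmt"
	"strings"
)

func main() {
	ignores := []string{
		"DirAvailable--etc-kubernetes-manifests",
		"DirAvailable--var-lib-minikube",
		"DirAvailable--var-lib-minikube-etcd",
		"FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml",
		"FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml",
		"FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml",
		"FileAvailable--etc-kubernetes-manifests-etcd.yaml",
		"Port-10250", "Swap", "NumCPU", "Mem",
	}
	cmd := fmt.Sprintf(
		"sudo env PATH=%q kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=%s",
		"/var/lib/minikube/binaries/v1.31.1:$PATH",
		strings.Join(ignores, ","),
	)
	fmt.Println(cmd)
}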
	I1014 08:22:40.314874    3988 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1014 08:22:40.314874    3988 command_runner.go:130] > [init] Using Kubernetes version: v1.31.1
	I1014 08:22:40.314874    3988 command_runner.go:130] > [preflight] Running pre-flight checks
	I1014 08:22:40.314874    3988 kubeadm.go:310] [preflight] Running pre-flight checks
	I1014 08:22:40.314874    3988 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 08:22:40.314874    3988 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 08:22:40.315883    3988 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 08:22:40.315883    3988 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 08:22:40.315883    3988 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 08:22:40.315883    3988 command_runner.go:130] > [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 08:22:40.315883    3988 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 08:22:40.315883    3988 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 08:22:40.318829    3988 out.go:235]   - Generating certificates and keys ...
	I1014 08:22:40.319053    3988 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1014 08:22:40.319109    3988 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1014 08:22:40.319320    3988 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1014 08:22:40.319320    3988 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1014 08:22:40.319567    3988 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I1014 08:22:40.319624    3988 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1014 08:22:40.319787    3988 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I1014 08:22:40.319818    3988 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1014 08:22:40.320073    3988 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I1014 08:22:40.320073    3988 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1014 08:22:40.320073    3988 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I1014 08:22:40.320073    3988 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1014 08:22:40.320073    3988 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I1014 08:22:40.320073    3988 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1014 08:22:40.320653    3988 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-671000] and IPs [172.20.100.167 127.0.0.1 ::1]
	I1014 08:22:40.320653    3988 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-671000] and IPs [172.20.100.167 127.0.0.1 ::1]
	I1014 08:22:40.320911    3988 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1014 08:22:40.320911    3988 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I1014 08:22:40.321031    3988 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-671000] and IPs [172.20.100.167 127.0.0.1 ::1]
	I1014 08:22:40.321031    3988 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-671000] and IPs [172.20.100.167 127.0.0.1 ::1]
	I1014 08:22:40.321031    3988 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I1014 08:22:40.321031    3988 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1014 08:22:40.321031    3988 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I1014 08:22:40.321576    3988 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1014 08:22:40.321576    3988 command_runner.go:130] > [certs] Generating "sa" key and public key
	I1014 08:22:40.321745    3988 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1014 08:22:40.321931    3988 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 08:22:40.322017    3988 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 08:22:40.322176    3988 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 08:22:40.322246    3988 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 08:22:40.322393    3988 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 08:22:40.322393    3988 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 08:22:40.322393    3988 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 08:22:40.322393    3988 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 08:22:40.322393    3988 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 08:22:40.322393    3988 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 08:22:40.322948    3988 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 08:22:40.322948    3988 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 08:22:40.323245    3988 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 08:22:40.323319    3988 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 08:22:40.323369    3988 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 08:22:40.323369    3988 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 08:22:40.325711    3988 out.go:235]   - Booting up control plane ...
	I1014 08:22:40.325711    3988 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 08:22:40.325711    3988 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 08:22:40.325711    3988 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 08:22:40.325711    3988 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 08:22:40.325711    3988 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 08:22:40.325711    3988 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 08:22:40.326673    3988 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 08:22:40.326673    3988 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 08:22:40.326673    3988 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 08:22:40.326673    3988 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 08:22:40.326673    3988 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1014 08:22:40.326673    3988 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1014 08:22:40.326673    3988 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 08:22:40.326673    3988 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 08:22:40.327660    3988 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 08:22:40.327660    3988 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 08:22:40.327660    3988 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002278422s
	I1014 08:22:40.327660    3988 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.002278422s
	I1014 08:22:40.327660    3988 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1014 08:22:40.327660    3988 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1014 08:22:40.327660    3988 command_runner.go:130] > [api-check] The API server is healthy after 6.502805444s
	I1014 08:22:40.327660    3988 kubeadm.go:310] [api-check] The API server is healthy after 6.502805444s
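Note: kubeadm gates on two health endpoints here, and both can be probed by hand. A sketch, assuming the CA path and API endpoint reported elsewhere in this log:

    # kubelet healthz, the 127.0.0.1:10248 endpoint polled above
    curl -fsS http://127.0.0.1:10248/healthz
    # API server health; /healthz is readable by unauthenticated clients by default
    curl -fsS --cacert /var/lib/minikube/certs/ca.crt https://172.20.100.167:8443/healthz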
	I1014 08:22:40.327660    3988 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1014 08:22:40.327660    3988 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1014 08:22:40.328748    3988 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1014 08:22:40.328748    3988 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1014 08:22:40.328748    3988 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I1014 08:22:40.328748    3988 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1014 08:22:40.328748    3988 kubeadm.go:310] [mark-control-plane] Marking the node multinode-671000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1014 08:22:40.328748    3988 command_runner.go:130] > [mark-control-plane] Marking the node multinode-671000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1014 08:22:40.329512    3988 command_runner.go:130] > [bootstrap-token] Using token: u8ls8j.ymqiz2ofkyw9gyoj
	I1014 08:22:40.329554    3988 kubeadm.go:310] [bootstrap-token] Using token: u8ls8j.ymqiz2ofkyw9gyoj
	I1014 08:22:40.332558    3988 out.go:235]   - Configuring RBAC rules ...
	I1014 08:22:40.332558    3988 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1014 08:22:40.332558    3988 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1014 08:22:40.332558    3988 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1014 08:22:40.332558    3988 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1014 08:22:40.333557    3988 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1014 08:22:40.333557    3988 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1014 08:22:40.333557    3988 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1014 08:22:40.333557    3988 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1014 08:22:40.333557    3988 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1014 08:22:40.333557    3988 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1014 08:22:40.333557    3988 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1014 08:22:40.333557    3988 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1014 08:22:40.334571    3988 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1014 08:22:40.334571    3988 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1014 08:22:40.334571    3988 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1014 08:22:40.334571    3988 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1014 08:22:40.334571    3988 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1014 08:22:40.334571    3988 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1014 08:22:40.334571    3988 kubeadm.go:310] 
	I1014 08:22:40.334571    3988 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I1014 08:22:40.334571    3988 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1014 08:22:40.334571    3988 kubeadm.go:310] 
	I1014 08:22:40.334571    3988 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1014 08:22:40.334571    3988 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I1014 08:22:40.334571    3988 kubeadm.go:310] 
	I1014 08:22:40.335569    3988 command_runner.go:130] >   mkdir -p $HOME/.kube
	I1014 08:22:40.335569    3988 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1014 08:22:40.335569    3988 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1014 08:22:40.335569    3988 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1014 08:22:40.335569    3988 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1014 08:22:40.335569    3988 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1014 08:22:40.335569    3988 kubeadm.go:310] 
	I1014 08:22:40.335569    3988 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1014 08:22:40.335569    3988 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I1014 08:22:40.335569    3988 kubeadm.go:310] 
	I1014 08:22:40.335569    3988 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1014 08:22:40.335569    3988 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1014 08:22:40.335569    3988 kubeadm.go:310] 
	I1014 08:22:40.335569    3988 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I1014 08:22:40.335569    3988 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1014 08:22:40.335569    3988 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1014 08:22:40.335569    3988 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1014 08:22:40.336549    3988 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1014 08:22:40.336549    3988 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1014 08:22:40.336549    3988 kubeadm.go:310] 
	I1014 08:22:40.336549    3988 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I1014 08:22:40.336549    3988 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1014 08:22:40.336549    3988 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I1014 08:22:40.336549    3988 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1014 08:22:40.336549    3988 kubeadm.go:310] 
	I1014 08:22:40.336549    3988 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token u8ls8j.ymqiz2ofkyw9gyoj \
	I1014 08:22:40.336549    3988 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token u8ls8j.ymqiz2ofkyw9gyoj \
	I1014 08:22:40.337565    3988 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f876f951efbe7f1ed47ca578ee0d33e6ce8fe69c1a008a8154ee00f56ac84e46 \
	I1014 08:22:40.337565    3988 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:f876f951efbe7f1ed47ca578ee0d33e6ce8fe69c1a008a8154ee00f56ac84e46 \
	I1014 08:22:40.337565    3988 kubeadm.go:310] 	--control-plane 
	I1014 08:22:40.337565    3988 command_runner.go:130] > 	--control-plane 
	I1014 08:22:40.337565    3988 kubeadm.go:310] 
	I1014 08:22:40.337565    3988 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I1014 08:22:40.337565    3988 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1014 08:22:40.337565    3988 kubeadm.go:310] 
	I1014 08:22:40.337565    3988 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token u8ls8j.ymqiz2ofkyw9gyoj \
	I1014 08:22:40.337565    3988 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token u8ls8j.ymqiz2ofkyw9gyoj \
	I1014 08:22:40.337565    3988 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:f876f951efbe7f1ed47ca578ee0d33e6ce8fe69c1a008a8154ee00f56ac84e46 
	I1014 08:22:40.337565    3988 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:f876f951efbe7f1ed47ca578ee0d33e6ce8fe69c1a008a8154ee00f56ac84e46 
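Note: the --discovery-token-ca-cert-hash printed above is the SHA-256 of the cluster CA's public key. It can be recomputed inside the VM with the standard kubeadm recipe; the path below assumes the certificateDir "/var/lib/minikube/certs" reported in the certs phase:

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'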
	I1014 08:22:40.338551    3988 cni.go:84] Creating CNI manager for ""
	I1014 08:22:40.338551    3988 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1014 08:22:40.341594    3988 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1014 08:22:40.353591    3988 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1014 08:22:40.362172    3988 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1014 08:22:40.362244    3988 command_runner.go:130] >   Size: 2785880   	Blocks: 5448       IO Block: 4096   regular file
	I1014 08:22:40.362244    3988 command_runner.go:130] > Device: 0,17	Inode: 3500        Links: 1
	I1014 08:22:40.362244    3988 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1014 08:22:40.362244    3988 command_runner.go:130] > Access: 2024-10-14 15:20:50.564114000 +0000
	I1014 08:22:40.362244    3988 command_runner.go:130] > Modify: 2024-10-08 16:10:48.000000000 +0000
	I1014 08:22:40.362244    3988 command_runner.go:130] > Change: 2024-10-14 08:20:42.125000000 +0000
	I1014 08:22:40.362244    3988 command_runner.go:130] >  Birth: -
	I1014 08:22:40.362362    3988 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1014 08:22:40.362463    3988 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1014 08:22:40.408764    3988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1014 08:22:41.031487    3988 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I1014 08:22:41.031487    3988 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I1014 08:22:41.031487    3988 command_runner.go:130] > serviceaccount/kindnet created
	I1014 08:22:41.031487    3988 command_runner.go:130] > daemonset.apps/kindnet created
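Note: the manifest applied above creates kindnet's RBAC objects and DaemonSet. A quick follow-up check that the DaemonSet actually schedules (hypothetical, not part of this run; assumes it lands in kube-system as minikube's manifest does):

    kubectl -n kube-system rollout status daemonset/kindnet --timeout=120s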
	I1014 08:22:41.031700    3988 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1014 08:22:41.045037    3988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 08:22:41.048001    3988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-671000 minikube.k8s.io/updated_at=2024_10_14T08_22_41_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b minikube.k8s.io/name=multinode-671000 minikube.k8s.io/primary=true
	I1014 08:22:41.070728    3988 command_runner.go:130] > -16
	I1014 08:22:41.070797    3988 ops.go:34] apiserver oom_adj: -16
	I1014 08:22:41.246038    3988 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I1014 08:22:41.248874    3988 command_runner.go:130] > node/multinode-671000 labeled
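Note: both commands above are fire-and-forget; the created binding and labels can be verified afterwards with (hypothetical follow-up, not in this log):

    kubectl get clusterrolebinding minikube-rbac -o wide
    kubectl get node multinode-671000 --show-labels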
	I1014 08:22:41.260223    3988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 08:22:41.370437    3988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1014 08:22:41.762103    3988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 08:22:41.875333    3988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1014 08:22:42.261877    3988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 08:22:42.374800    3988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1014 08:22:42.760796    3988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 08:22:42.886152    3988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1014 08:22:43.260023    3988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 08:22:43.378784    3988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1014 08:22:43.759720    3988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 08:22:43.863481    3988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1014 08:22:44.259257    3988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 08:22:44.375586    3988 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1014 08:22:44.759248    3988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 08:22:44.942575    3988 command_runner.go:130] > NAME      SECRETS   AGE
	I1014 08:22:44.942674    3988 command_runner.go:130] > default   0         0s
	I1014 08:22:44.942782    3988 kubeadm.go:1113] duration metric: took 3.9109097s to wait for elevateKubeSystemPrivileges
	I1014 08:22:44.942909    3988 kubeadm.go:394] duration metric: took 17.4240533s to StartCluster
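Note: the loop above re-runs `kubectl get sa default` roughly every 500 ms (visible in the timestamps) until the token controller has created the default ServiceAccount. An equivalent shell sketch:

    until sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done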
	I1014 08:22:44.942909    3988 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 08:22:44.943272    3988 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1014 08:22:44.944894    3988 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 08:22:44.946475    3988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1014 08:22:44.946572    3988 start.go:235] Will wait 6m0s for node &{Name: IP:172.20.100.167 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1014 08:22:44.946572    3988 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1014 08:22:44.946773    3988 addons.go:69] Setting storage-provisioner=true in profile "multinode-671000"
	I1014 08:22:44.946868    3988 addons.go:69] Setting default-storageclass=true in profile "multinode-671000"
	I1014 08:22:44.946868    3988 addons.go:234] Setting addon storage-provisioner=true in "multinode-671000"
	I1014 08:22:44.946995    3988 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-671000"
	I1014 08:22:44.947187    3988 config.go:182] Loaded profile config "multinode-671000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 08:22:44.947220    3988 host.go:66] Checking if "multinode-671000" exists ...
	I1014 08:22:44.948292    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000 ).state
	I1014 08:22:44.949898    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000 ).state
	I1014 08:22:44.951782    3988 out.go:177] * Verifying Kubernetes components...
	I1014 08:22:44.971237    3988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 08:22:45.295391    3988 command_runner.go:130] > apiVersion: v1
	I1014 08:22:45.295501    3988 command_runner.go:130] > data:
	I1014 08:22:45.295583    3988 command_runner.go:130] >   Corefile: |
	I1014 08:22:45.295583    3988 command_runner.go:130] >     .:53 {
	I1014 08:22:45.295583    3988 command_runner.go:130] >         errors
	I1014 08:22:45.295583    3988 command_runner.go:130] >         health {
	I1014 08:22:45.295583    3988 command_runner.go:130] >            lameduck 5s
	I1014 08:22:45.295660    3988 command_runner.go:130] >         }
	I1014 08:22:45.295660    3988 command_runner.go:130] >         ready
	I1014 08:22:45.295660    3988 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1014 08:22:45.295660    3988 command_runner.go:130] >            pods insecure
	I1014 08:22:45.295750    3988 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1014 08:22:45.295750    3988 command_runner.go:130] >            ttl 30
	I1014 08:22:45.295822    3988 command_runner.go:130] >         }
	I1014 08:22:45.295822    3988 command_runner.go:130] >         prometheus :9153
	I1014 08:22:45.295822    3988 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1014 08:22:45.295900    3988 command_runner.go:130] >            max_concurrent 1000
	I1014 08:22:45.295928    3988 command_runner.go:130] >         }
	I1014 08:22:45.295928    3988 command_runner.go:130] >         cache 30
	I1014 08:22:45.295992    3988 command_runner.go:130] >         loop
	I1014 08:22:45.296016    3988 command_runner.go:130] >         reload
	I1014 08:22:45.296016    3988 command_runner.go:130] >         loadbalance
	I1014 08:22:45.296016    3988 command_runner.go:130] >     }
	I1014 08:22:45.296016    3988 command_runner.go:130] > kind: ConfigMap
	I1014 08:22:45.296016    3988 command_runner.go:130] > metadata:
	I1014 08:22:45.296016    3988 command_runner.go:130] >   creationTimestamp: "2024-10-14T15:22:39Z"
	I1014 08:22:45.296016    3988 command_runner.go:130] >   name: coredns
	I1014 08:22:45.296108    3988 command_runner.go:130] >   namespace: kube-system
	I1014 08:22:45.296108    3988 command_runner.go:130] >   resourceVersion: "268"
	I1014 08:22:45.296108    3988 command_runner.go:130] >   uid: 74dc9d9b-2ab0-4809-b257-ef7a589222af
	I1014 08:22:45.296409    3988 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.20.96.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1014 08:22:45.415111    3988 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 08:22:45.853842    3988 command_runner.go:130] > configmap/coredns replaced
	I1014 08:22:45.853842    3988 start.go:971] {"host.minikube.internal": 172.20.96.1} host record injected into CoreDNS's ConfigMap
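Note: the sed pipeline above splices two directives into the Corefile before replacing the ConfigMap: a `log` line ahead of `errors`, and a `hosts` block ahead of the `forward` stanza. Reconstructed from the sed expressions (the patched Corefile itself is not echoed back in the log; unchanged directives elided), the result should read:

    .:53 {
        log
        errors
        ...
        hosts {
           172.20.96.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
        ...
    }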
	I1014 08:22:45.854844    3988 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1014 08:22:45.854844    3988 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1014 08:22:45.855897    3988 kapi.go:59] client config for multinode-671000: &rest.Config{Host:"https://172.20.100.167:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-671000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-671000\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2926ae0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1014 08:22:45.855897    3988 kapi.go:59] client config for multinode-671000: &rest.Config{Host:"https://172.20.100.167:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-671000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-671000\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2926ae0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1014 08:22:45.856848    3988 cert_rotation.go:140] Starting client certificate rotation controller
	I1014 08:22:45.857852    3988 node_ready.go:35] waiting up to 6m0s for node "multinode-671000" to be "Ready" ...
	I1014 08:22:45.857852    3988 round_trippers.go:463] GET https://172.20.100.167:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1014 08:22:45.857852    3988 round_trippers.go:469] Request Headers:
	I1014 08:22:45.857852    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000
	I1014 08:22:45.857852    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:22:45.857852    3988 round_trippers.go:469] Request Headers:
	I1014 08:22:45.857852    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:22:45.857852    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:22:45.857852    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:22:45.889543    3988 round_trippers.go:574] Response Status: 200 OK in 31 milliseconds
	I1014 08:22:45.889543    3988 round_trippers.go:577] Response Headers:
	I1014 08:22:45.889543    3988 round_trippers.go:580]     Audit-Id: afa33d77-2b03-4cd5-93a5-760b2eadcb32
	I1014 08:22:45.889543    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:22:45.889543    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:22:45.889543    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:22:45.889543    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:22:45.889543    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:22:45 GMT
	I1014 08:22:45.889786    3988 round_trippers.go:574] Response Status: 200 OK in 31 milliseconds
	I1014 08:22:45.889786    3988 round_trippers.go:577] Response Headers:
	I1014 08:22:45.889786    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:22:45.889786    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:22:45.889786    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:22:45.889786    3988 round_trippers.go:580]     Content-Length: 291
	I1014 08:22:45.889786    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:22:45 GMT
	I1014 08:22:45.889786    3988 round_trippers.go:580]     Audit-Id: 45d2973f-fe0a-47f0-a8b9-274cd113df75
	I1014 08:22:45.889786    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:22:45.889786    3988 request.go:1351] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"e6e642fc-659f-4393-9291-3315d8127407","resourceVersion":"382","creationTimestamp":"2024-10-14T15:22:39Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1014 08:22:45.889786    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"336","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I1014 08:22:45.890790    3988 request.go:1351] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"e6e642fc-659f-4393-9291-3315d8127407","resourceVersion":"382","creationTimestamp":"2024-10-14T15:22:39Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1014 08:22:45.890790    3988 round_trippers.go:463] PUT https://172.20.100.167:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1014 08:22:45.890790    3988 round_trippers.go:469] Request Headers:
	I1014 08:22:45.890790    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:22:45.890790    3988 round_trippers.go:473]     Content-Type: application/json
	I1014 08:22:45.890790    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:22:45.908203    3988 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I1014 08:22:45.908315    3988 round_trippers.go:577] Response Headers:
	I1014 08:22:45.908315    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:22:45.908315    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:22:45.908392    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:22:45.908392    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:22:45.908392    3988 round_trippers.go:580]     Content-Length: 291
	I1014 08:22:45.908445    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:22:45 GMT
	I1014 08:22:45.908492    3988 round_trippers.go:580]     Audit-Id: 4e53bff0-f96a-4e2a-b5db-22e71432fd66
	I1014 08:22:45.908492    3988 request.go:1351] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"e6e642fc-659f-4393-9291-3315d8127407","resourceVersion":"384","creationTimestamp":"2024-10-14T15:22:39Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1014 08:22:46.358626    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000
	I1014 08:22:46.358626    3988 round_trippers.go:463] GET https://172.20.100.167:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1014 08:22:46.358626    3988 round_trippers.go:469] Request Headers:
	I1014 08:22:46.358626    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:22:46.358626    3988 round_trippers.go:469] Request Headers:
	I1014 08:22:46.358626    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:22:46.358626    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:22:46.358626    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:22:46.362601    3988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:22:46.362601    3988 round_trippers.go:577] Response Headers:
	I1014 08:22:46.362601    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:22:46.362601    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:22:46.362601    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:22:46.362601    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:22:46.362601    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:22:46 GMT
	I1014 08:22:46.362601    3988 round_trippers.go:580]     Audit-Id: 33a77c9b-38a0-4f33-8213-94fee6a886fb
	I1014 08:22:46.362601    3988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:22:46.362601    3988 round_trippers.go:577] Response Headers:
	I1014 08:22:46.362601    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:22:46.362601    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:22:46.362601    3988 round_trippers.go:580]     Content-Length: 291
	I1014 08:22:46.362601    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:22:46 GMT
	I1014 08:22:46.362601    3988 round_trippers.go:580]     Audit-Id: 51e1e402-2b98-48ba-a9aa-c0c394f18c31
	I1014 08:22:46.362601    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:22:46.362601    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:22:46.362601    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"336","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I1014 08:22:46.362601    3988 request.go:1351] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"e6e642fc-659f-4393-9291-3315d8127407","resourceVersion":"394","creationTimestamp":"2024-10-14T15:22:39Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1014 08:22:46.363600    3988 kapi.go:214] "coredns" deployment in "kube-system" namespace and "multinode-671000" context rescaled to 1 replicas
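Note: the PUT above writes spec.replicas=1 to the Scale subresource of the coredns Deployment; the same rescale expressed as a kubectl command:

    kubectl -n kube-system scale deployment coredns --replicas=1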
	I1014 08:22:46.859391    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000
	I1014 08:22:46.859391    3988 round_trippers.go:469] Request Headers:
	I1014 08:22:46.859391    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:22:46.859391    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:22:46.863923    3988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:22:46.863923    3988 round_trippers.go:577] Response Headers:
	I1014 08:22:46.863923    3988 round_trippers.go:580]     Audit-Id: 350c1c4e-721c-4228-aea0-aaf8995d8e42
	I1014 08:22:46.863923    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:22:46.863923    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:22:46.863923    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:22:46.864075    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:22:46.864075    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:22:46 GMT
	I1014 08:22:46.864247    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"336","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I1014 08:22:47.199818    3988 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:22:47.199818    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:22:47.199818    3988 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:22:47.199818    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:22:47.200823    3988 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1014 08:22:47.200823    3988 kapi.go:59] client config for multinode-671000: &rest.Config{Host:"https://172.20.100.167:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-671000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-671000\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2926ae0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1014 08:22:47.201813    3988 addons.go:234] Setting addon default-storageclass=true in "multinode-671000"
	I1014 08:22:47.201813    3988 host.go:66] Checking if "multinode-671000" exists ...
	I1014 08:22:47.203816    3988 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 08:22:47.204809    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000 ).state
	I1014 08:22:47.207819    3988 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 08:22:47.207819    3988 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1014 08:22:47.207819    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000 ).state
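Note: the scp above only stages the manifest inside the VM; the apply step itself is not shown in this excerpt, but would look like (hypothetical sketch, paths as logged):

    sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply \
      --kubeconfig=/var/lib/minikube/kubeconfig \
      -f /etc/kubernetes/addons/storage-provisioner.yaml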
	I1014 08:22:47.358070    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000
	I1014 08:22:47.358070    3988 round_trippers.go:469] Request Headers:
	I1014 08:22:47.358070    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:22:47.358070    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:22:47.386813    3988 round_trippers.go:574] Response Status: 200 OK in 28 milliseconds
	I1014 08:22:47.386813    3988 round_trippers.go:577] Response Headers:
	I1014 08:22:47.386813    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:22:47.386813    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:22:47.386813    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:22:47 GMT
	I1014 08:22:47.386813    3988 round_trippers.go:580]     Audit-Id: 84598af0-1bca-499b-b953-13f1200ad5da
	I1014 08:22:47.386813    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:22:47.386813    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:22:47.386813    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"336","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I1014 08:22:47.858033    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000
	I1014 08:22:47.858033    3988 round_trippers.go:469] Request Headers:
	I1014 08:22:47.858033    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:22:47.858033    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:22:47.862035    3988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:22:47.862035    3988 round_trippers.go:577] Response Headers:
	I1014 08:22:47.862035    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:22:47.862035    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:22:47.862035    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:22:47.862035    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:22:47 GMT
	I1014 08:22:47.862035    3988 round_trippers.go:580]     Audit-Id: c1fd0cb0-c6af-4419-9612-aa23cadec560
	I1014 08:22:47.862035    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:22:47.863203    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"336","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I1014 08:22:47.864161    3988 node_ready.go:53] node "multinode-671000" has status "Ready":"False"
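Note: node_ready polls GET /api/v1/nodes/multinode-671000 about twice a second and inspects the Ready condition in status.conditions. An equivalent client-side wait, matching the 6m0s budget set above:

    kubectl wait --for=condition=Ready node/multinode-671000 --timeout=6m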
	I1014 08:22:48.358139    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000
	I1014 08:22:48.358139    3988 round_trippers.go:469] Request Headers:
	I1014 08:22:48.358139    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:22:48.358139    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:22:48.362261    3988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:22:48.362371    3988 round_trippers.go:577] Response Headers:
	I1014 08:22:48.362462    3988 round_trippers.go:580]     Audit-Id: 788b988a-381f-49b4-ab64-28f45cf760b4
	I1014 08:22:48.362462    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:22:48.362462    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:22:48.362462    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:22:48.362462    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:22:48.362532    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:22:48 GMT
	I1014 08:22:48.363362    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"336","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I1014 08:22:48.858612    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000
	I1014 08:22:48.858612    3988 round_trippers.go:469] Request Headers:
	I1014 08:22:48.858612    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:22:48.858612    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:22:48.862182    3988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:22:48.862252    3988 round_trippers.go:577] Response Headers:
	I1014 08:22:48.862252    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:22:48.862252    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:22:48.862252    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:22:48.862252    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:22:48 GMT
	I1014 08:22:48.862252    3988 round_trippers.go:580]     Audit-Id: 3fa8cfb8-41c8-41d8-9963-86fefee27156
	I1014 08:22:48.862252    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:22:48.862252    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"336","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I1014 08:22:49.358437    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000
	I1014 08:22:49.358437    3988 round_trippers.go:469] Request Headers:
	I1014 08:22:49.358437    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:22:49.358437    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:22:49.361707    3988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:22:49.361785    3988 round_trippers.go:577] Response Headers:
	I1014 08:22:49.361785    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:22:49 GMT
	I1014 08:22:49.361785    3988 round_trippers.go:580]     Audit-Id: 5124d55e-699e-4433-affa-650c6008ee48
	I1014 08:22:49.361785    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:22:49.361785    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:22:49.361785    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:22:49.361785    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:22:49.362644    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"336","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I1014 08:22:49.558715    3988 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:22:49.559531    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:22:49.559645    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000 ).networkadapters[0]).ipaddresses[0]
	I1014 08:22:49.586759    3988 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:22:49.587476    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:22:49.587539    3988 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1014 08:22:49.587539    3988 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1014 08:22:49.587539    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000 ).state
	I1014 08:22:49.858120    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000
	I1014 08:22:49.858120    3988 round_trippers.go:469] Request Headers:
	I1014 08:22:49.858120    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:22:49.858120    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:22:49.863370    3988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:22:49.863525    3988 round_trippers.go:577] Response Headers:
	I1014 08:22:49.863525    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:22:49.863613    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:22:49.863613    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:22:49.863613    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:22:49.863613    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:22:49 GMT
	I1014 08:22:49.863686    3988 round_trippers.go:580]     Audit-Id: 72609e5b-f45c-48dd-afec-ad33ccbd110f
	I1014 08:22:49.863930    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"336","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I1014 08:22:49.864558    3988 node_ready.go:53] node "multinode-671000" has status "Ready":"False"
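
The repeated GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000 calls above are minikube's node-readiness wait loop: it re-fetches the Node object roughly every 500ms and checks the Ready condition until it flips to True, logging "has status \"Ready\":\"False\"" on each miss. A minimal client-go sketch of that polling pattern (an illustrative helper, not minikube's actual node_ready.go):

    package sketch

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitNodeReady re-fetches the Node every 500ms (the cadence of the GETs
    // in the log) until its Ready condition is True or the timeout expires.
    func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 5*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // treat transient API errors as "not ready yet"
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }

Each iteration of such a loop corresponds to one GET/response pair in the log; the "Ready":"False" lines mark iterations where the condition check returned false.
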
	I1014 08:22:50.357970    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000
	I1014 08:22:50.357970    3988 round_trippers.go:469] Request Headers:
	I1014 08:22:50.357970    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:22:50.357970    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:22:50.361853    3988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:22:50.361853    3988 round_trippers.go:577] Response Headers:
	I1014 08:22:50.361853    3988 round_trippers.go:580]     Audit-Id: d817d93b-82bf-4b64-b75d-de031b70b4e0
	I1014 08:22:50.361853    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:22:50.361853    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:22:50.361853    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:22:50.361853    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:22:50.361853    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:22:50 GMT
	I1014 08:22:50.362396    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"336","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I1014 08:22:50.859061    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000
	I1014 08:22:50.859134    3988 round_trippers.go:469] Request Headers:
	I1014 08:22:50.859134    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:22:50.859134    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:22:50.862496    3988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:22:50.862496    3988 round_trippers.go:577] Response Headers:
	I1014 08:22:50.862595    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:22:50.862595    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:22:50 GMT
	I1014 08:22:50.862595    3988 round_trippers.go:580]     Audit-Id: a1e99110-a918-4532-91fa-620170da80ee
	I1014 08:22:50.862595    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:22:50.862595    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:22:50.862595    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:22:50.863220    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"336","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I1014 08:22:51.358216    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000
	I1014 08:22:51.358216    3988 round_trippers.go:469] Request Headers:
	I1014 08:22:51.358216    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:22:51.358216    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:22:51.361213    3988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 08:22:51.361213    3988 round_trippers.go:577] Response Headers:
	I1014 08:22:51.361213    3988 round_trippers.go:580]     Audit-Id: 09e8993f-c703-4ce4-b6df-675f4bf3679f
	I1014 08:22:51.361213    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:22:51.361213    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:22:51.361213    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:22:51.361213    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:22:51.361213    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:22:51 GMT
	I1014 08:22:51.362516    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"336","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I1014 08:22:51.796357    3988 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:22:51.796357    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:22:51.797143    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000 ).networkadapters[0]).ipaddresses[0]
	I1014 08:22:51.858590    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000
	I1014 08:22:51.858590    3988 round_trippers.go:469] Request Headers:
	I1014 08:22:51.858590    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:22:51.858590    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:22:51.862439    3988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:22:51.862511    3988 round_trippers.go:577] Response Headers:
	I1014 08:22:51.862592    3988 round_trippers.go:580]     Audit-Id: 62eb62de-a988-491a-ad79-9daffeda8259
	I1014 08:22:51.862592    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:22:51.862592    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:22:51.862592    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:22:51.862592    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:22:51.862592    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:22:51 GMT
	I1014 08:22:51.862899    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"336","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I1014 08:22:52.194754    3988 main.go:141] libmachine: [stdout =====>] : 172.20.100.167
	
	I1014 08:22:52.194754    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:22:52.194754    3988 sshutil.go:53] new ssh client: &{IP:172.20.100.167 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-671000\id_rsa Username:docker}
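
The [executing ==>] lines show how the Hyper-V driver discovers the VM's address: it shells out to PowerShell, reads the first IP of the first network adapter, and then opens an SSH client against that address using the machine's id_rsa key. A hedged Go sketch of that PowerShell invocation (illustrative only, assuming the same one-liner as the log):

    package sketch

    import (
        "os/exec"
        "strings"
    )

    // hypervVMIP mirrors the PowerShell one-liner in the log:
    //   (( Hyper-V\Get-VM <name> ).networkadapters[0]).ipaddresses[0]
    // It is a sketch; the real driver adds retries and richer error handling.
    func hypervVMIP(vmName string) (string, error) {
        script := `(( Hyper-V\Get-VM ` + vmName + ` ).networkadapters[0]).ipaddresses[0]`
        out, err := exec.Command(
            `C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
            "-NoProfile", "-NonInteractive", script,
        ).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }
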
	I1014 08:22:52.342192    3988 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 08:22:52.358259    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000
	I1014 08:22:52.358259    3988 round_trippers.go:469] Request Headers:
	I1014 08:22:52.358259    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:22:52.358259    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:22:52.362281    3988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:22:52.362350    3988 round_trippers.go:577] Response Headers:
	I1014 08:22:52.362350    3988 round_trippers.go:580]     Audit-Id: f254cac0-05fa-4181-a5e2-a305e0394c3a
	I1014 08:22:52.362418    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:22:52.362418    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:22:52.362418    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:22:52.362418    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:22:52.362493    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:22:52 GMT
	I1014 08:22:52.362976    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"336","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I1014 08:22:52.363610    3988 node_ready.go:53] node "multinode-671000" has status "Ready":"False"
	I1014 08:22:52.858725    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000
	I1014 08:22:52.858725    3988 round_trippers.go:469] Request Headers:
	I1014 08:22:52.858725    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:22:52.858725    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:22:52.863897    3988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:22:52.863974    3988 round_trippers.go:577] Response Headers:
	I1014 08:22:52.863974    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:22:52.863974    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:22:52 GMT
	I1014 08:22:52.863974    3988 round_trippers.go:580]     Audit-Id: 190960d6-c1a9-47f5-93d3-1e28a40d7df9
	I1014 08:22:52.863974    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:22:52.863974    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:22:52.863974    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:22:52.864085    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"336","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I1014 08:22:52.935192    3988 command_runner.go:130] > serviceaccount/storage-provisioner created
	I1014 08:22:52.935327    3988 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I1014 08:22:52.935327    3988 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1014 08:22:52.935327    3988 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1014 08:22:52.935327    3988 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I1014 08:22:52.935405    3988 command_runner.go:130] > pod/storage-provisioner created
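
Addon installation, as the ssh_runner lines above show, works by copying each manifest into the VM under /etc/kubernetes/addons/ and then running the bundled kubectl over SSH as root; the command_runner lines echo kubectl's stdout. A hedged sketch of that remote apply using golang.org/x/crypto/ssh (an assumed helper, not minikube's ssh_runner):

    package sketch

    import (
        "os"

        "golang.org/x/crypto/ssh"
    )

    // applyManifest runs `kubectl apply` inside the VM over SSH, roughly what
    // ssh_runner.go does for /etc/kubernetes/addons/*.yaml. The paths and the
    // "docker" user match the log; everything else here is illustrative.
    func applyManifest(ip, keyPath, manifest string) (string, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return "", err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return "", err
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for throwaway test VMs
        }
        client, err := ssh.Dial("tcp", ip+":22", cfg)
        if err != nil {
            return "", err
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            return "", err
        }
        defer sess.Close()
        out, err := sess.CombinedOutput(
            "sudo KUBECONFIG=/var/lib/minikube/kubeconfig " +
                "/var/lib/minikube/binaries/v1.31.1/kubectl apply -f " + manifest)
        return string(out), err
    }
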
	I1014 08:22:53.358862    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000
	I1014 08:22:53.358862    3988 round_trippers.go:469] Request Headers:
	I1014 08:22:53.359404    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:22:53.359404    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:22:53.365811    3988 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1014 08:22:53.365811    3988 round_trippers.go:577] Response Headers:
	I1014 08:22:53.365811    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:22:53.365811    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:22:53.365811    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:22:53.365811    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:22:53 GMT
	I1014 08:22:53.365811    3988 round_trippers.go:580]     Audit-Id: 0e359991-4a2d-4582-be0d-84b6da929f02
	I1014 08:22:53.365811    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:22:53.365811    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"336","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I1014 08:22:53.858133    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000
	I1014 08:22:53.858133    3988 round_trippers.go:469] Request Headers:
	I1014 08:22:53.858133    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:22:53.858133    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:22:53.863469    3988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:22:53.863533    3988 round_trippers.go:577] Response Headers:
	I1014 08:22:53.863533    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:22:53.863533    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:22:53 GMT
	I1014 08:22:53.863533    3988 round_trippers.go:580]     Audit-Id: 33f61efd-f2b2-4f92-aa8d-ad3875ff2470
	I1014 08:22:53.863533    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:22:53.863533    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:22:53.863645    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:22:53.863958    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"336","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I1014 08:22:54.358837    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000
	I1014 08:22:54.358837    3988 round_trippers.go:469] Request Headers:
	I1014 08:22:54.358837    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:22:54.358837    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:22:54.363964    3988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:22:54.363964    3988 round_trippers.go:577] Response Headers:
	I1014 08:22:54.364092    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:22:54 GMT
	I1014 08:22:54.364092    3988 round_trippers.go:580]     Audit-Id: 3abffd38-a5a6-4040-9fcf-a7880c0b254d
	I1014 08:22:54.364092    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:22:54.364092    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:22:54.364092    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:22:54.364092    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:22:54.364326    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"336","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I1014 08:22:54.365192    3988 node_ready.go:53] node "multinode-671000" has status "Ready":"False"
	I1014 08:22:54.380032    3988 main.go:141] libmachine: [stdout =====>] : 172.20.100.167
	
	I1014 08:22:54.380372    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:22:54.380468    3988 sshutil.go:53] new ssh client: &{IP:172.20.100.167 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-671000\id_rsa Username:docker}
	I1014 08:22:54.521938    3988 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1014 08:22:54.715121    3988 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I1014 08:22:54.715121    3988 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1014 08:22:54.715121    3988 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1014 08:22:54.716075    3988 round_trippers.go:463] GET https://172.20.100.167:8443/apis/storage.k8s.io/v1/storageclasses
	I1014 08:22:54.716075    3988 round_trippers.go:469] Request Headers:
	I1014 08:22:54.716075    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:22:54.716199    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:22:54.721280    3988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:22:54.721280    3988 round_trippers.go:577] Response Headers:
	I1014 08:22:54.721280    3988 round_trippers.go:580]     Content-Length: 1273
	I1014 08:22:54.721280    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:22:54 GMT
	I1014 08:22:54.721383    3988 round_trippers.go:580]     Audit-Id: 6574970a-6c30-4bd9-967d-75c7484c6b20
	I1014 08:22:54.721383    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:22:54.721383    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:22:54.721383    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:22:54.721383    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:22:54.721383    3988 request.go:1351] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"424"},"items":[{"metadata":{"name":"standard","uid":"5ce362d0-96e8-4141-9bb2-83c6000d8cc5","resourceVersion":"424","creationTimestamp":"2024-10-14T15:22:54Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-10-14T15:22:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I1014 08:22:54.722106    3988 request.go:1351] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"5ce362d0-96e8-4141-9bb2-83c6000d8cc5","resourceVersion":"424","creationTimestamp":"2024-10-14T15:22:54Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-10-14T15:22:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1014 08:22:54.722205    3988 round_trippers.go:463] PUT https://172.20.100.167:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1014 08:22:54.722205    3988 round_trippers.go:469] Request Headers:
	I1014 08:22:54.722205    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:22:54.722205    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:22:54.722292    3988 round_trippers.go:473]     Content-Type: application/json
	I1014 08:22:54.726182    3988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:22:54.726182    3988 round_trippers.go:577] Response Headers:
	I1014 08:22:54.726182    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:22:54.726182    3988 round_trippers.go:580]     Content-Length: 1220
	I1014 08:22:54.726182    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:22:54 GMT
	I1014 08:22:54.726182    3988 round_trippers.go:580]     Audit-Id: 588330b8-a6db-4efc-9814-eda4af845402
	I1014 08:22:54.726182    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:22:54.726182    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:22:54.726182    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:22:54.726182    3988 request.go:1351] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"5ce362d0-96e8-4141-9bb2-83c6000d8cc5","resourceVersion":"424","creationTimestamp":"2024-10-14T15:22:54Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-10-14T15:22:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1014 08:22:54.730021    3988 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1014 08:22:54.732706    3988 addons.go:510] duration metric: took 9.7861184s for enable addons: enabled=[storage-provisioner default-storageclass]
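
The GET/PUT pair on /apis/storage.k8s.io/v1/storageclasses just above is the default-storageclass addon confirming that the "standard" class keeps its storageclass.kubernetes.io/is-default-class: "true" annotation. A minimal client-go sketch of that read-modify-write (the function name is hypothetical):

    package sketch

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // ensureDefaultStorageClass re-annotates the "standard" StorageClass as the
    // cluster default; Update issues the PUT on
    // /apis/storage.k8s.io/v1/storageclasses/standard seen in the log.
    func ensureDefaultStorageClass(ctx context.Context, cs kubernetes.Interface) error {
        sc, err := cs.StorageV1().StorageClasses().Get(ctx, "standard", metav1.GetOptions{})
        if err != nil {
            return err
        }
        if sc.Annotations == nil {
            sc.Annotations = map[string]string{}
        }
        sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
        _, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
        return err
    }
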
	I1014 08:22:54.858228    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000
	I1014 08:22:54.858673    3988 round_trippers.go:469] Request Headers:
	I1014 08:22:54.858673    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:22:54.858673    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:22:54.862541    3988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:22:54.862659    3988 round_trippers.go:577] Response Headers:
	I1014 08:22:54.862659    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:22:54.862659    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:22:54.862659    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:22:54.862659    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:22:54.862659    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:22:54 GMT
	I1014 08:22:54.862760    3988 round_trippers.go:580]     Audit-Id: b2621509-00e0-49cb-aeb0-4732fbad78d5
	I1014 08:22:54.862911    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"336","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I1014 08:22:55.358607    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000
	I1014 08:22:55.358607    3988 round_trippers.go:469] Request Headers:
	I1014 08:22:55.358607    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:22:55.358607    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:22:55.366223    3988 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1014 08:22:55.366223    3988 round_trippers.go:577] Response Headers:
	I1014 08:22:55.366223    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:22:55.366223    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:22:55.366223    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:22:55.366223    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:22:55.366223    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:22:55 GMT
	I1014 08:22:55.366223    3988 round_trippers.go:580]     Audit-Id: c8c6bbbb-4942-4995-834c-e1d21cf9bc9f
	I1014 08:22:55.366223    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"336","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I1014 08:22:55.858872    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000
	I1014 08:22:55.858949    3988 round_trippers.go:469] Request Headers:
	I1014 08:22:55.858949    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:22:55.858949    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:22:55.863172    3988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:22:55.863172    3988 round_trippers.go:577] Response Headers:
	I1014 08:22:55.863172    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:22:55.863172    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:22:55.863172    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:22:55.863298    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:22:55 GMT
	I1014 08:22:55.863298    3988 round_trippers.go:580]     Audit-Id: 21c1b0bb-2fef-4c22-a8fc-7184160876dd
	I1014 08:22:55.863298    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:22:55.863542    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"336","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I1014 08:22:56.358584    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000
	I1014 08:22:56.358584    3988 round_trippers.go:469] Request Headers:
	I1014 08:22:56.358584    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:22:56.358720    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:22:56.362578    3988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:22:56.362578    3988 round_trippers.go:577] Response Headers:
	I1014 08:22:56.363125    3988 round_trippers.go:580]     Audit-Id: d59d9922-d53b-442a-b0b1-26284e154aae
	I1014 08:22:56.363125    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:22:56.363125    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:22:56.363125    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:22:56.363125    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:22:56.363125    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:22:56 GMT
	I1014 08:22:56.363436    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"336","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I1014 08:22:56.857986    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000
	I1014 08:22:56.857986    3988 round_trippers.go:469] Request Headers:
	I1014 08:22:56.857986    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:22:56.857986    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:22:56.863331    3988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:22:56.863422    3988 round_trippers.go:577] Response Headers:
	I1014 08:22:56.863422    3988 round_trippers.go:580]     Audit-Id: af6da5af-da26-4e85-b5c3-22c9b8176b18
	I1014 08:22:56.863517    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:22:56.863517    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:22:56.863517    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:22:56.863552    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:22:56.863552    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:22:56 GMT
	I1014 08:22:56.863802    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"336","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I1014 08:22:56.864400    3988 node_ready.go:53] node "multinode-671000" has status "Ready":"False"
	I1014 08:22:57.358765    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000
	I1014 08:22:57.358765    3988 round_trippers.go:469] Request Headers:
	I1014 08:22:57.358765    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:22:57.358765    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:22:57.363944    3988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:22:57.364018    3988 round_trippers.go:577] Response Headers:
	I1014 08:22:57.364018    3988 round_trippers.go:580]     Audit-Id: 09444a14-7706-4b45-bfc9-0ec44d518ce3
	I1014 08:22:57.364018    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:22:57.364104    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:22:57.364104    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:22:57.364104    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:22:57.364104    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:22:57 GMT
	I1014 08:22:57.365024    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"336","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I1014 08:22:57.858772    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000
	I1014 08:22:57.858772    3988 round_trippers.go:469] Request Headers:
	I1014 08:22:57.858772    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:22:57.858772    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:22:57.864915    3988 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1014 08:22:57.865412    3988 round_trippers.go:577] Response Headers:
	I1014 08:22:57.865412    3988 round_trippers.go:580]     Audit-Id: c70f31e9-f04b-4c7d-a36b-e2890ccff66d
	I1014 08:22:57.865412    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:22:57.865412    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:22:57.865412    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:22:57.865412    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:22:57.865412    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:22:57 GMT
	I1014 08:22:57.865652    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"336","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I1014 08:22:58.358413    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000
	I1014 08:22:58.358413    3988 round_trippers.go:469] Request Headers:
	I1014 08:22:58.358413    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:22:58.358413    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:22:58.363990    3988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:22:58.364059    3988 round_trippers.go:577] Response Headers:
	I1014 08:22:58.364059    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:22:58.364059    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:22:58 GMT
	I1014 08:22:58.364059    3988 round_trippers.go:580]     Audit-Id: b2069131-265c-459f-87e9-31bfe054d075
	I1014 08:22:58.364059    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:22:58.364059    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:22:58.364059    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:22:58.364337    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"336","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I1014 08:22:58.857981    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000
	I1014 08:22:58.857981    3988 round_trippers.go:469] Request Headers:
	I1014 08:22:58.857981    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:22:58.857981    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:22:58.862409    3988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:22:58.863351    3988 round_trippers.go:577] Response Headers:
	I1014 08:22:58.863421    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:22:58.863421    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:22:58.863421    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:22:58.863421    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:22:58.863421    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:22:58 GMT
	I1014 08:22:58.863421    3988 round_trippers.go:580]     Audit-Id: c942a478-c313-40e8-beae-8f5513fe0ddd
	I1014 08:22:58.863556    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"336","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I1014 08:22:59.359099    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000
	I1014 08:22:59.359811    3988 round_trippers.go:469] Request Headers:
	I1014 08:22:59.359811    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:22:59.359811    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:22:59.365141    3988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:22:59.365315    3988 round_trippers.go:577] Response Headers:
	I1014 08:22:59.365337    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:22:59.365337    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:22:59.365337    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:22:59.365337    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:22:59.365337    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:22:59 GMT
	I1014 08:22:59.365337    3988 round_trippers.go:580]     Audit-Id: fa606ec2-b825-4d3f-bb4c-78948d8d2948
	I1014 08:22:59.365554    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"336","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I1014 08:22:59.366097    3988 node_ready.go:53] node "multinode-671000" has status "Ready":"False"
	I1014 08:22:59.859137    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000
	I1014 08:22:59.859137    3988 round_trippers.go:469] Request Headers:
	I1014 08:22:59.859137    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:22:59.859137    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:22:59.864372    3988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:22:59.864372    3988 round_trippers.go:577] Response Headers:
	I1014 08:22:59.864372    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:22:59.864487    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:22:59.864487    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:22:59.864487    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:22:59.864487    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:22:59 GMT
	I1014 08:22:59.864588    3988 round_trippers.go:580]     Audit-Id: b4d58b78-f93b-4210-9d41-eb3ff15a68ea
	I1014 08:22:59.864826    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"336","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I1014 08:23:00.357969    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000
	I1014 08:23:00.357969    3988 round_trippers.go:469] Request Headers:
	I1014 08:23:00.357969    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:23:00.357969    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:23:00.362855    3988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:23:00.362983    3988 round_trippers.go:577] Response Headers:
	I1014 08:23:00.362983    3988 round_trippers.go:580]     Audit-Id: 9c8b0af9-e146-44b1-870a-436841dbff63
	I1014 08:23:00.362983    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:23:00.362983    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:23:00.362983    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:23:00.362983    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:23:00.362983    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:23:00 GMT
	I1014 08:23:00.363520    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"336","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I1014 08:23:00.858018    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000
	I1014 08:23:00.858018    3988 round_trippers.go:469] Request Headers:
	I1014 08:23:00.858018    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:23:00.858018    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:23:00.862399    3988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:23:00.862495    3988 round_trippers.go:577] Response Headers:
	I1014 08:23:00.862495    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:23:00 GMT
	I1014 08:23:00.862495    3988 round_trippers.go:580]     Audit-Id: 6b617de9-878d-41b7-9791-cec305525d77
	I1014 08:23:00.862575    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:23:00.862575    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:23:00.862575    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:23:00.862575    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:23:00.862938    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"336","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I1014 08:23:01.357956    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000
	I1014 08:23:01.357956    3988 round_trippers.go:469] Request Headers:
	I1014 08:23:01.357956    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:23:01.357956    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:23:01.363736    3988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:23:01.363840    3988 round_trippers.go:577] Response Headers:
	I1014 08:23:01.363840    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:23:01.363905    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:23:01 GMT
	I1014 08:23:01.363905    3988 round_trippers.go:580]     Audit-Id: 71ad9ccf-6fd5-4b29-8c78-db95d8d53b88
	I1014 08:23:01.363905    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:23:01.363905    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:23:01.363905    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:23:01.364392    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"336","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I1014 08:23:01.861074    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000
	I1014 08:23:01.861074    3988 round_trippers.go:469] Request Headers:
	I1014 08:23:01.861074    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:23:01.861074    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:23:01.865093    3988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:23:01.865093    3988 round_trippers.go:577] Response Headers:
	I1014 08:23:01.865093    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:23:01 GMT
	I1014 08:23:01.865093    3988 round_trippers.go:580]     Audit-Id: fd7c31f1-25cc-4bab-bae8-e0e410c14cd8
	I1014 08:23:01.865093    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:23:01.865093    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:23:01.865093    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:23:01.865093    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:23:01.865482    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"336","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I1014 08:23:01.866357    3988 node_ready.go:53] node "multinode-671000" has status "Ready":"False"
	I1014 08:23:02.359401    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000
	I1014 08:23:02.359401    3988 round_trippers.go:469] Request Headers:
	I1014 08:23:02.359401    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:23:02.359401    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:23:02.363549    3988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:23:02.363549    3988 round_trippers.go:577] Response Headers:
	I1014 08:23:02.363549    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:23:02 GMT
	I1014 08:23:02.363549    3988 round_trippers.go:580]     Audit-Id: ab77dc87-a3c9-4d85-b1e6-8a4a108d3833
	I1014 08:23:02.363662    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:23:02.363662    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:23:02.363662    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:23:02.363662    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:23:02.363862    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"336","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I1014 08:23:02.858651    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000
	I1014 08:23:02.858651    3988 round_trippers.go:469] Request Headers:
	I1014 08:23:02.858651    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:23:02.858651    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:23:02.863880    3988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:23:02.863952    3988 round_trippers.go:577] Response Headers:
	I1014 08:23:02.863952    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:23:02.863952    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:23:02 GMT
	I1014 08:23:02.863952    3988 round_trippers.go:580]     Audit-Id: ebfe346c-a647-4912-b5b5-7f15557dc05f
	I1014 08:23:02.864042    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:23:02.864042    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:23:02.864042    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:23:02.864339    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"336","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I1014 08:23:03.358321    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000
	I1014 08:23:03.358321    3988 round_trippers.go:469] Request Headers:
	I1014 08:23:03.358321    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:23:03.358321    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:23:03.363841    3988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:23:03.363841    3988 round_trippers.go:577] Response Headers:
	I1014 08:23:03.363841    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:23:03.363841    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:23:03.363841    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:23:03.363841    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:23:03 GMT
	I1014 08:23:03.364078    3988 round_trippers.go:580]     Audit-Id: 69bfd34e-727d-4dc7-9b82-11c74eed8094
	I1014 08:23:03.364078    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:23:03.364502    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"336","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I1014 08:23:03.859138    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000
	I1014 08:23:03.859138    3988 round_trippers.go:469] Request Headers:
	I1014 08:23:03.859332    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:23:03.859332    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:23:03.863692    3988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:23:03.863692    3988 round_trippers.go:577] Response Headers:
	I1014 08:23:03.863692    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:23:03.863692    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:23:03.863692    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:23:03.863692    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:23:03.863692    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:23:03 GMT
	I1014 08:23:03.863692    3988 round_trippers.go:580]     Audit-Id: 605c0d94-174f-4983-96c8-374e728bd3b4
	I1014 08:23:03.863692    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"336","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I1014 08:23:04.358435    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000
	I1014 08:23:04.358435    3988 round_trippers.go:469] Request Headers:
	I1014 08:23:04.358435    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:23:04.358435    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:23:04.362490    3988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:23:04.362490    3988 round_trippers.go:577] Response Headers:
	I1014 08:23:04.362490    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:23:04.362490    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:23:04.362490    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:23:04.362490    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:23:04.362490    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:23:04 GMT
	I1014 08:23:04.362490    3988 round_trippers.go:580]     Audit-Id: bb28aa56-f1ee-45c7-803d-f6ceed249934
	I1014 08:23:04.363744    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"336","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I1014 08:23:04.364287    3988 node_ready.go:53] node "multinode-671000" has status "Ready":"False"
	I1014 08:23:04.859115    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000
	I1014 08:23:04.859115    3988 round_trippers.go:469] Request Headers:
	I1014 08:23:04.859250    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:23:04.859250    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:23:04.864464    3988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:23:04.864596    3988 round_trippers.go:577] Response Headers:
	I1014 08:23:04.864596    3988 round_trippers.go:580]     Audit-Id: 1acf9000-81d6-40a9-abf1-b3e11ef643c5
	I1014 08:23:04.864596    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:23:04.864596    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:23:04.864596    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:23:04.864596    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:23:04.864596    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:23:04 GMT
	I1014 08:23:04.864922    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"336","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I1014 08:23:05.358988    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000
	I1014 08:23:05.359146    3988 round_trippers.go:469] Request Headers:
	I1014 08:23:05.359146    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:23:05.359146    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:23:05.363790    3988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:23:05.363895    3988 round_trippers.go:577] Response Headers:
	I1014 08:23:05.363895    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:23:05.363895    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:23:05 GMT
	I1014 08:23:05.363895    3988 round_trippers.go:580]     Audit-Id: 8e6af442-fcde-42ba-81fe-01f6aad429a3
	I1014 08:23:05.363895    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:23:05.363971    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:23:05.363971    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:23:05.364353    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"336","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I1014 08:23:05.858584    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000
	I1014 08:23:05.859127    3988 round_trippers.go:469] Request Headers:
	I1014 08:23:05.859127    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:23:05.859127    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:23:05.863528    3988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:23:05.863528    3988 round_trippers.go:577] Response Headers:
	I1014 08:23:05.863528    3988 round_trippers.go:580]     Audit-Id: 7f2f31f6-1dcd-4cf9-b2cd-5027de99d08c
	I1014 08:23:05.863528    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:23:05.863528    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:23:05.863528    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:23:05.863528    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:23:05.863528    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:23:05 GMT
	I1014 08:23:05.863870    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"336","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I1014 08:23:06.358952    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000
	I1014 08:23:06.358952    3988 round_trippers.go:469] Request Headers:
	I1014 08:23:06.358952    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:23:06.359134    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:23:06.362451    3988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:23:06.362806    3988 round_trippers.go:577] Response Headers:
	I1014 08:23:06.362894    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:23:06.362894    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:23:06.362894    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:23:06.362894    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:23:06 GMT
	I1014 08:23:06.362894    3988 round_trippers.go:580]     Audit-Id: d3270b3d-fe40-4b9f-b8e9-ee87714732a3
	I1014 08:23:06.362894    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:23:06.363239    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"336","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I1014 08:23:06.858184    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000
	I1014 08:23:06.858184    3988 round_trippers.go:469] Request Headers:
	I1014 08:23:06.858184    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:23:06.858184    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:23:06.866192    3988 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1014 08:23:06.866192    3988 round_trippers.go:577] Response Headers:
	I1014 08:23:06.866192    3988 round_trippers.go:580]     Audit-Id: a31e91c0-c3ff-476e-a1bc-4c40b25ba990
	I1014 08:23:06.866192    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:23:06.866192    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:23:06.866192    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:23:06.866192    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:23:06.866192    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:23:06 GMT
	I1014 08:23:06.866192    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"430","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I1014 08:23:06.867193    3988 node_ready.go:49] node "multinode-671000" has status "Ready":"True"
	I1014 08:23:06.867193    3988 node_ready.go:38] duration metric: took 21.0093089s for node "multinode-671000" to be "Ready" ...
	I1014 08:23:06.867193    3988 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 08:23:06.867193    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/namespaces/kube-system/pods
	I1014 08:23:06.867193    3988 round_trippers.go:469] Request Headers:
	I1014 08:23:06.867193    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:23:06.867193    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:23:06.874221    3988 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1014 08:23:06.874221    3988 round_trippers.go:577] Response Headers:
	I1014 08:23:06.874221    3988 round_trippers.go:580]     Audit-Id: 8b715e93-ee9f-41fb-aa17-dcdce18bff7a
	I1014 08:23:06.875109    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:23:06.875109    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:23:06.875109    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:23:06.875109    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:23:06.875109    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:23:06 GMT
	I1014 08:23:06.877067    3988 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"436"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"436","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 57862 chars]
	I1014 08:23:06.881161    3988 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-fs9ct" in "kube-system" namespace to be "Ready" ...
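
The pod phase mirrors the node phase: the PodList fetched above is filtered by the system-critical labels named in the log (k8s-app=kube-dns, component=etcd, and so on), and each matching pod is then polled until its PodReady condition reports True. Note that each iteration below also re-fetches the Node, which this sketch omits. A minimal sketch under the same assumptions as the earlier snippet (waitPodReady is an illustrative name):

// waitPodReady re-fetches one pod on a fixed cadence until its
// PodReady condition reports True, mirroring the GETs logged below.
func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, namespace, name string) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		pod, err := cs.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
				return nil
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
		}
	}
}

// The candidate pods themselves come from a labelled list, e.g.:
//   pods, err := cs.CoreV1().Pods("kube-system").List(ctx,
//       metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
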
	I1014 08:23:06.882149    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:23:06.882149    3988 round_trippers.go:469] Request Headers:
	I1014 08:23:06.882149    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:23:06.882149    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:23:06.885452    3988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:23:06.885540    3988 round_trippers.go:577] Response Headers:
	I1014 08:23:06.885540    3988 round_trippers.go:580]     Audit-Id: c8dbd166-8416-4fa6-9b1e-8837676e3d57
	I1014 08:23:06.885540    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:23:06.885540    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:23:06.885627    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:23:06.885627    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:23:06.885627    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:23:06 GMT
	I1014 08:23:06.885847    3988 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"436","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6705 chars]
	I1014 08:23:06.886159    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000
	I1014 08:23:06.886159    3988 round_trippers.go:469] Request Headers:
	I1014 08:23:06.886159    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:23:06.886159    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:23:06.889398    3988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:23:06.889482    3988 round_trippers.go:577] Response Headers:
	I1014 08:23:06.889482    3988 round_trippers.go:580]     Audit-Id: d682218e-aa16-4861-b1b2-63539e2de274
	I1014 08:23:06.889482    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:23:06.889482    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:23:06.889482    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:23:06.889551    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:23:06.889551    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:23:06 GMT
	I1014 08:23:06.889730    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"430","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I1014 08:23:07.382766    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:23:07.382766    3988 round_trippers.go:469] Request Headers:
	I1014 08:23:07.382766    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:23:07.382766    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:23:07.386829    3988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:23:07.386829    3988 round_trippers.go:577] Response Headers:
	I1014 08:23:07.386829    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:23:07.386829    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:23:07.386829    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:23:07.386829    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:23:07.386829    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:23:07 GMT
	I1014 08:23:07.386829    3988 round_trippers.go:580]     Audit-Id: 05c2dbe6-8ab7-4101-9896-ec3e01213c48
	I1014 08:23:07.386829    3988 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"436","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6705 chars]
	I1014 08:23:07.387814    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000
	I1014 08:23:07.387814    3988 round_trippers.go:469] Request Headers:
	I1014 08:23:07.387814    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:23:07.388767    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:23:07.390789    3988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 08:23:07.390789    3988 round_trippers.go:577] Response Headers:
	I1014 08:23:07.390789    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:23:07.390789    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:23:07 GMT
	I1014 08:23:07.390789    3988 round_trippers.go:580]     Audit-Id: 8fdecab1-3073-4bee-a8b0-ae9e2219dfe5
	I1014 08:23:07.390789    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:23:07.390789    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:23:07.390789    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:23:07.390789    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"430","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I1014 08:23:07.883338    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:23:07.883338    3988 round_trippers.go:469] Request Headers:
	I1014 08:23:07.883338    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:23:07.883338    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:23:07.889360    3988 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1014 08:23:07.889360    3988 round_trippers.go:577] Response Headers:
	I1014 08:23:07.889360    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:23:07.889360    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:23:07 GMT
	I1014 08:23:07.889360    3988 round_trippers.go:580]     Audit-Id: d23013de-4305-487a-851b-cd65ddfadb4c
	I1014 08:23:07.889360    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:23:07.889360    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:23:07.889360    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:23:07.890349    3988 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"436","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6705 chars]
	I1014 08:23:07.890349    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000
	I1014 08:23:07.890349    3988 round_trippers.go:469] Request Headers:
	I1014 08:23:07.890349    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:23:07.890349    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:23:07.893368    3988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:23:07.894350    3988 round_trippers.go:577] Response Headers:
	I1014 08:23:07.894350    3988 round_trippers.go:580]     Audit-Id: fcfb12af-d37e-4367-b177-8edd7e5a2fc4
	I1014 08:23:07.894350    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:23:07.894350    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:23:07.894350    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:23:07.894350    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:23:07.894350    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:23:07 GMT
	I1014 08:23:07.894350    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"430","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I1014 08:23:08.382355    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:23:08.382355    3988 round_trippers.go:469] Request Headers:
	I1014 08:23:08.382355    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:23:08.382355    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:23:08.388001    3988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:23:08.388108    3988 round_trippers.go:577] Response Headers:
	I1014 08:23:08.388184    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:23:08.388184    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:23:08.388241    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:23:08 GMT
	I1014 08:23:08.388263    3988 round_trippers.go:580]     Audit-Id: 32aeb894-5ed8-4785-a2d1-1edb21e61724
	I1014 08:23:08.388263    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:23:08.388263    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:23:08.388703    3988 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"436","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6705 chars]
	I1014 08:23:08.389544    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000
	I1014 08:23:08.389544    3988 round_trippers.go:469] Request Headers:
	I1014 08:23:08.389544    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:23:08.389544    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:23:08.392126    3988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 08:23:08.392411    3988 round_trippers.go:577] Response Headers:
	I1014 08:23:08.392411    3988 round_trippers.go:580]     Audit-Id: b1b5c080-68a8-48b8-a73e-41fc0e26c1f4
	I1014 08:23:08.392411    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:23:08.392411    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:23:08.392411    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:23:08.392411    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:23:08.392495    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:23:08 GMT
	I1014 08:23:08.392633    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"430","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I1014 08:23:08.882732    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:23:08.882732    3988 round_trippers.go:469] Request Headers:
	I1014 08:23:08.882732    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:23:08.882732    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:23:08.901341    3988 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I1014 08:23:08.901451    3988 round_trippers.go:577] Response Headers:
	I1014 08:23:08.901451    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:23:08.901451    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:23:08 GMT
	I1014 08:23:08.901451    3988 round_trippers.go:580]     Audit-Id: bc2998ee-4a1c-4d91-998c-8e96c287e481
	I1014 08:23:08.901451    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:23:08.901451    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:23:08.901451    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:23:08.902808    3988 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"451","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I1014 08:23:08.903764    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000
	I1014 08:23:08.903830    3988 round_trippers.go:469] Request Headers:
	I1014 08:23:08.903830    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:23:08.903830    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:23:08.909738    3988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:23:08.910120    3988 round_trippers.go:577] Response Headers:
	I1014 08:23:08.910120    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:23:08 GMT
	I1014 08:23:08.910120    3988 round_trippers.go:580]     Audit-Id: 7a0595e9-835e-4312-908e-9fdd61252b90
	I1014 08:23:08.910120    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:23:08.910120    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:23:08.910120    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:23:08.910120    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:23:08.912189    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"430","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I1014 08:23:08.912755    3988 pod_ready.go:93] pod "coredns-7c65d6cfc9-fs9ct" in "kube-system" namespace has status "Ready":"True"
	I1014 08:23:08.912755    3988 pod_ready.go:82] duration metric: took 2.0315913s for pod "coredns-7c65d6cfc9-fs9ct" in "kube-system" namespace to be "Ready" ...
	I1014 08:23:08.912755    3988 pod_ready.go:79] waiting up to 6m0s for pod "etcd-multinode-671000" in "kube-system" namespace to be "Ready" ...
	I1014 08:23:08.912755    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-671000
	I1014 08:23:08.912755    3988 round_trippers.go:469] Request Headers:
	I1014 08:23:08.912755    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:23:08.912755    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:23:08.920146    3988 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1014 08:23:08.920210    3988 round_trippers.go:577] Response Headers:
	I1014 08:23:08.920210    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:23:08.920210    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:23:08 GMT
	I1014 08:23:08.920210    3988 round_trippers.go:580]     Audit-Id: 883d6eb5-83af-495e-b589-c2e87334f1a2
	I1014 08:23:08.920210    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:23:08.920210    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:23:08.920210    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:23:08.920490    3988 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-671000","namespace":"kube-system","uid":"56dfdf16-1224-41e3-94de-9d7f4021a17d","resourceVersion":"409","creationTimestamp":"2024-10-14T15:22:39Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.20.100.167:2379","kubernetes.io/config.hash":"778fdb620bffec66f911bf24e3c8210b","kubernetes.io/config.mirror":"778fdb620bffec66f911bf24e3c8210b","kubernetes.io/config.seen":"2024-10-14T15:22:39.775208719Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6476 chars]
	I1014 08:23:08.921220    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000
	I1014 08:23:08.921220    3988 round_trippers.go:469] Request Headers:
	I1014 08:23:08.921220    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:23:08.921301    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:23:08.923952    3988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 08:23:08.924155    3988 round_trippers.go:577] Response Headers:
	I1014 08:23:08.924155    3988 round_trippers.go:580]     Audit-Id: eea740b6-e2d8-4a45-93e0-727b5afe1f83
	I1014 08:23:08.924155    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:23:08.924155    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:23:08.924155    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:23:08.924236    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:23:08.924236    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:23:08 GMT
	I1014 08:23:08.924377    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"430","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I1014 08:23:08.924675    3988 pod_ready.go:93] pod "etcd-multinode-671000" in "kube-system" namespace has status "Ready":"True"
	I1014 08:23:08.924675    3988 pod_ready.go:82] duration metric: took 11.9202ms for pod "etcd-multinode-671000" in "kube-system" namespace to be "Ready" ...
	I1014 08:23:08.924675    3988 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-multinode-671000" in "kube-system" namespace to be "Ready" ...
	I1014 08:23:08.924675    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-671000
	I1014 08:23:08.924675    3988 round_trippers.go:469] Request Headers:
	I1014 08:23:08.924675    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:23:08.924675    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:23:08.932686    3988 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1014 08:23:08.932686    3988 round_trippers.go:577] Response Headers:
	I1014 08:23:08.932686    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:23:08.932686    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:23:08 GMT
	I1014 08:23:08.932686    3988 round_trippers.go:580]     Audit-Id: 79b0b44b-3050-4f17-8671-f56f184145a1
	I1014 08:23:08.932686    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:23:08.932686    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:23:08.932686    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:23:08.932686    3988 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-671000","namespace":"kube-system","uid":"80ea37b8-9db1-4a39-9e9e-51c01edadfb1","resourceVersion":"370","creationTimestamp":"2024-10-14T15:22:39Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.20.100.167:8443","kubernetes.io/config.hash":"3cf38ccc62eb74f6e658e1f66ae8cab1","kubernetes.io/config.mirror":"3cf38ccc62eb74f6e658e1f66ae8cab1","kubernetes.io/config.seen":"2024-10-14T15:22:39.775211919Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7704 chars]
	I1014 08:23:08.933705    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000
	I1014 08:23:08.933705    3988 round_trippers.go:469] Request Headers:
	I1014 08:23:08.933705    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:23:08.933705    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:23:08.936681    3988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 08:23:08.936681    3988 round_trippers.go:577] Response Headers:
	I1014 08:23:08.936681    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:23:08.936681    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:23:08.936681    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:23:08 GMT
	I1014 08:23:08.936681    3988 round_trippers.go:580]     Audit-Id: 18d30c7c-3fbd-47d4-96a5-9c736374824f
	I1014 08:23:08.936681    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:23:08.936681    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:23:08.936681    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"430","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I1014 08:23:08.936681    3988 pod_ready.go:93] pod "kube-apiserver-multinode-671000" in "kube-system" namespace has status "Ready":"True"
	I1014 08:23:08.936681    3988 pod_ready.go:82] duration metric: took 12.0058ms for pod "kube-apiserver-multinode-671000" in "kube-system" namespace to be "Ready" ...
	I1014 08:23:08.937803    3988 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-multinode-671000" in "kube-system" namespace to be "Ready" ...
	I1014 08:23:08.937803    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-671000
	I1014 08:23:08.937803    3988 round_trippers.go:469] Request Headers:
	I1014 08:23:08.937803    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:23:08.937803    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:23:08.939681    3988 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1014 08:23:08.939681    3988 round_trippers.go:577] Response Headers:
	I1014 08:23:08.939681    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:23:08.939681    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:23:08.940682    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:23:08 GMT
	I1014 08:23:08.940682    3988 round_trippers.go:580]     Audit-Id: cee126fb-bf7b-428e-a0b6-3170c1a0c19b
	I1014 08:23:08.940682    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:23:08.940682    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:23:08.940682    3988 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-671000","namespace":"kube-system","uid":"a5c7bb80-c844-476f-ba47-1cd4e599b92d","resourceVersion":"406","creationTimestamp":"2024-10-14T15:22:39Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b83e31864e0ae98d29d960866012ecb0","kubernetes.io/config.mirror":"b83e31864e0ae98d29d960866012ecb0","kubernetes.io/config.seen":"2024-10-14T15:22:39.775213119Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7269 chars]
	I1014 08:23:08.940682    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000
	I1014 08:23:08.941678    3988 round_trippers.go:469] Request Headers:
	I1014 08:23:08.941678    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:23:08.941678    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:23:08.962817    3988 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I1014 08:23:08.962817    3988 round_trippers.go:577] Response Headers:
	I1014 08:23:08.962817    3988 round_trippers.go:580]     Audit-Id: 795ee2b4-f757-4cda-827e-f6ac21790be8
	I1014 08:23:08.962817    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:23:08.962817    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:23:08.962817    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:23:08.962817    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:23:08.962817    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:23:08 GMT
	I1014 08:23:08.962817    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"430","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I1014 08:23:08.963364    3988 pod_ready.go:93] pod "kube-controller-manager-multinode-671000" in "kube-system" namespace has status "Ready":"True"
	I1014 08:23:08.963519    3988 pod_ready.go:82] duration metric: took 25.7163ms for pod "kube-controller-manager-multinode-671000" in "kube-system" namespace to be "Ready" ...
	I1014 08:23:08.963519    3988 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-r74dx" in "kube-system" namespace to be "Ready" ...
	I1014 08:23:08.963625    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r74dx
	I1014 08:23:08.963727    3988 round_trippers.go:469] Request Headers:
	I1014 08:23:08.963727    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:23:08.963759    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:23:08.965841    3988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 08:23:08.965841    3988 round_trippers.go:577] Response Headers:
	I1014 08:23:08.965841    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:23:08.965841    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:23:08.965841    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:23:08.965841    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:23:08 GMT
	I1014 08:23:08.965841    3988 round_trippers.go:580]     Audit-Id: d26a5880-e7d3-4497-9f53-5f8c3bbc0413
	I1014 08:23:08.965841    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:23:08.965841    3988 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-r74dx","generateName":"kube-proxy-","namespace":"kube-system","uid":"f8d14473-8859-4015-84e9-d00656cc00c9","resourceVersion":"402","creationTimestamp":"2024-10-14T15:22:44Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e5217416-1ed2-4ea5-ae9b-ee7bcdba0d2a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e5217416-1ed2-4ea5-ae9b-ee7bcdba0d2a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6199 chars]
	I1014 08:23:08.966905    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000
	I1014 08:23:08.966905    3988 round_trippers.go:469] Request Headers:
	I1014 08:23:08.966905    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:23:08.966905    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:23:08.969870    3988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 08:23:08.969870    3988 round_trippers.go:577] Response Headers:
	I1014 08:23:08.969870    3988 round_trippers.go:580]     Audit-Id: 6513dca2-3f95-4598-ae61-d38bdca5ba1d
	I1014 08:23:08.969870    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:23:08.969870    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:23:08.969870    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:23:08.969870    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:23:08.969870    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:23:08 GMT
	I1014 08:23:08.969870    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"430","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I1014 08:23:08.970854    3988 pod_ready.go:93] pod "kube-proxy-r74dx" in "kube-system" namespace has status "Ready":"True"
	I1014 08:23:08.970854    3988 pod_ready.go:82] duration metric: took 7.335ms for pod "kube-proxy-r74dx" in "kube-system" namespace to be "Ready" ...
	I1014 08:23:08.970854    3988 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-multinode-671000" in "kube-system" namespace to be "Ready" ...
	I1014 08:23:09.083274    3988 request.go:632] Waited for 112.4198ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.100.167:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-671000
	I1014 08:23:09.083809    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-671000
	I1014 08:23:09.083809    3988 round_trippers.go:469] Request Headers:
	I1014 08:23:09.083809    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:23:09.083809    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:23:09.088680    3988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:23:09.088680    3988 round_trippers.go:577] Response Headers:
	I1014 08:23:09.088680    3988 round_trippers.go:580]     Audit-Id: 14cdfab5-1051-446e-8942-54a280025793
	I1014 08:23:09.088680    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:23:09.088680    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:23:09.088767    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:23:09.088767    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:23:09.088767    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:23:09 GMT
	I1014 08:23:09.088997    3988 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-671000","namespace":"kube-system","uid":"97febcab-f54d-4338-ba7c-2dc5e69b77fc","resourceVersion":"421","creationTimestamp":"2024-10-14T15:22:38Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"3e987cfaedc75c39145e8fc131c60c81","kubernetes.io/config.mirror":"3e987cfaedc75c39145e8fc131c60c81","kubernetes.io/config.seen":"2024-10-14T15:22:32.104995089Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4999 chars]
	I1014 08:23:09.283443    3988 request.go:632] Waited for 193.8546ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.100.167:8443/api/v1/nodes/multinode-671000
	I1014 08:23:09.283443    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000
	I1014 08:23:09.283443    3988 round_trippers.go:469] Request Headers:
	I1014 08:23:09.283443    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:23:09.283443    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:23:09.289126    3988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:23:09.289239    3988 round_trippers.go:577] Response Headers:
	I1014 08:23:09.289239    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:23:09.289239    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:23:09.289239    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:23:09.289239    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:23:09 GMT
	I1014 08:23:09.289239    3988 round_trippers.go:580]     Audit-Id: 552b2798-c992-4887-bf51-06478afb67b4
	I1014 08:23:09.289312    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:23:09.289715    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"430","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I1014 08:23:09.290397    3988 pod_ready.go:93] pod "kube-scheduler-multinode-671000" in "kube-system" namespace has status "Ready":"True"
	I1014 08:23:09.290397    3988 pod_ready.go:82] duration metric: took 319.5423ms for pod "kube-scheduler-multinode-671000" in "kube-system" namespace to be "Ready" ...
	I1014 08:23:09.290467    3988 pod_ready.go:39] duration metric: took 2.423271s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 08:23:09.290527    3988 api_server.go:52] waiting for apiserver process to appear ...
	I1014 08:23:09.301449    3988 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 08:23:09.332110    3988 command_runner.go:130] > 2051
	I1014 08:23:09.332110    3988 api_server.go:72] duration metric: took 24.3853525s to wait for apiserver process to appear ...
	I1014 08:23:09.332110    3988 api_server.go:88] waiting for apiserver healthz status ...
	I1014 08:23:09.332228    3988 api_server.go:253] Checking apiserver healthz at https://172.20.100.167:8443/healthz ...
	I1014 08:23:09.340580    3988 api_server.go:279] https://172.20.100.167:8443/healthz returned 200:
	ok
	I1014 08:23:09.340796    3988 round_trippers.go:463] GET https://172.20.100.167:8443/version
	I1014 08:23:09.340796    3988 round_trippers.go:469] Request Headers:
	I1014 08:23:09.340796    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:23:09.340796    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:23:09.342428    3988 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1014 08:23:09.342840    3988 round_trippers.go:577] Response Headers:
	I1014 08:23:09.342840    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:23:09.342840    3988 round_trippers.go:580]     Content-Length: 263
	I1014 08:23:09.342840    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:23:09 GMT
	I1014 08:23:09.342840    3988 round_trippers.go:580]     Audit-Id: 66f69bb9-3c3e-4eb0-acf4-518de661f344
	I1014 08:23:09.342840    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:23:09.342840    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:23:09.342840    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:23:09.342840    3988 request.go:1351] Response Body: {
	  "major": "1",
	  "minor": "31",
	  "gitVersion": "v1.31.1",
	  "gitCommit": "948afe5ca072329a73c8e79ed5938717a5cb3d21",
	  "gitTreeState": "clean",
	  "buildDate": "2024-09-11T21:22:08Z",
	  "goVersion": "go1.22.6",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1014 08:23:09.343101    3988 api_server.go:141] control plane version: v1.31.1
	I1014 08:23:09.343165    3988 api_server.go:131] duration metric: took 11.055ms to wait for apiserver health ...
	I1014 08:23:09.343165    3988 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 08:23:09.483288    3988 request.go:632] Waited for 140.0208ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.100.167:8443/api/v1/namespaces/kube-system/pods
	I1014 08:23:09.483932    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/namespaces/kube-system/pods
	I1014 08:23:09.483932    3988 round_trippers.go:469] Request Headers:
	I1014 08:23:09.483932    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:23:09.483932    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:23:09.490017    3988 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1014 08:23:09.490088    3988 round_trippers.go:577] Response Headers:
	I1014 08:23:09.490151    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:23:09.490151    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:23:09.490151    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:23:09.490151    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:23:09 GMT
	I1014 08:23:09.490151    3988 round_trippers.go:580]     Audit-Id: fb0c89ec-df65-448a-b6aa-dcd70500404f
	I1014 08:23:09.490151    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:23:09.491522    3988 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"455"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"451","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 57976 chars]
	I1014 08:23:09.494392    3988 system_pods.go:59] 8 kube-system pods found
	I1014 08:23:09.494392    3988 system_pods.go:61] "coredns-7c65d6cfc9-fs9ct" [fd736862-9e3e-4a3d-9a86-08efd2338477] Running
	I1014 08:23:09.494392    3988 system_pods.go:61] "etcd-multinode-671000" [56dfdf16-1224-41e3-94de-9d7f4021a17d] Running
	I1014 08:23:09.494392    3988 system_pods.go:61] "kindnet-wqrx6" [a508bbf9-7565-4c73-98cb-e9684985c298] Running
	I1014 08:23:09.494392    3988 system_pods.go:61] "kube-apiserver-multinode-671000" [80ea37b8-9db1-4a39-9e9e-51c01edadfb1] Running
	I1014 08:23:09.494392    3988 system_pods.go:61] "kube-controller-manager-multinode-671000" [a5c7bb80-c844-476f-ba47-1cd4e599b92d] Running
	I1014 08:23:09.494392    3988 system_pods.go:61] "kube-proxy-r74dx" [f8d14473-8859-4015-84e9-d00656cc00c9] Running
	I1014 08:23:09.494392    3988 system_pods.go:61] "kube-scheduler-multinode-671000" [97febcab-f54d-4338-ba7c-2dc5e69b77fc] Running
	I1014 08:23:09.494392    3988 system_pods.go:61] "storage-provisioner" [fde8ff75-bc7f-4db4-b098-c3a08b38d205] Running
	I1014 08:23:09.494392    3988 system_pods.go:74] duration metric: took 151.2261ms to wait for pod list to return data ...
	I1014 08:23:09.494392    3988 default_sa.go:34] waiting for default service account to be created ...
	I1014 08:23:09.682802    3988 request.go:632] Waited for 188.4102ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.100.167:8443/api/v1/namespaces/default/serviceaccounts
	I1014 08:23:09.682802    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/namespaces/default/serviceaccounts
	I1014 08:23:09.682802    3988 round_trippers.go:469] Request Headers:
	I1014 08:23:09.682802    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:23:09.682802    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:23:09.689304    3988 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1014 08:23:09.689304    3988 round_trippers.go:577] Response Headers:
	I1014 08:23:09.689429    3988 round_trippers.go:580]     Audit-Id: dc83ff4a-a8fa-456a-b290-ee01cfa83bed
	I1014 08:23:09.689429    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:23:09.689429    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:23:09.689429    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:23:09.689429    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:23:09.689429    3988 round_trippers.go:580]     Content-Length: 261
	I1014 08:23:09.689429    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:23:09 GMT
	I1014 08:23:09.689546    3988 request.go:1351] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"455"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"2d7618c1-d4b9-4719-9d93-d87bd887238a","resourceVersion":"332","creationTimestamp":"2024-10-14T15:22:44Z"}}]}
	I1014 08:23:09.690129    3988 default_sa.go:45] found service account: "default"
	I1014 08:23:09.690129    3988 default_sa.go:55] duration metric: took 195.7369ms for default service account to be created ...
	I1014 08:23:09.690236    3988 system_pods.go:116] waiting for k8s-apps to be running ...
	I1014 08:23:09.883337    3988 request.go:632] Waited for 192.9922ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.100.167:8443/api/v1/namespaces/kube-system/pods
	I1014 08:23:09.883827    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/namespaces/kube-system/pods
	I1014 08:23:09.883827    3988 round_trippers.go:469] Request Headers:
	I1014 08:23:09.883909    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:23:09.883909    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:23:09.896627    3988 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1014 08:23:09.896703    3988 round_trippers.go:577] Response Headers:
	I1014 08:23:09.896703    3988 round_trippers.go:580]     Audit-Id: da98cb32-8c2d-4383-8a8b-10cb8f575fcc
	I1014 08:23:09.896703    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:23:09.896703    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:23:09.896703    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:23:09.896703    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:23:09.896703    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:23:09 GMT
	I1014 08:23:09.898342    3988 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"456"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"451","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 57976 chars]
	I1014 08:23:09.901676    3988 system_pods.go:86] 8 kube-system pods found
	I1014 08:23:09.901676    3988 system_pods.go:89] "coredns-7c65d6cfc9-fs9ct" [fd736862-9e3e-4a3d-9a86-08efd2338477] Running
	I1014 08:23:09.901676    3988 system_pods.go:89] "etcd-multinode-671000" [56dfdf16-1224-41e3-94de-9d7f4021a17d] Running
	I1014 08:23:09.901676    3988 system_pods.go:89] "kindnet-wqrx6" [a508bbf9-7565-4c73-98cb-e9684985c298] Running
	I1014 08:23:09.901676    3988 system_pods.go:89] "kube-apiserver-multinode-671000" [80ea37b8-9db1-4a39-9e9e-51c01edadfb1] Running
	I1014 08:23:09.901676    3988 system_pods.go:89] "kube-controller-manager-multinode-671000" [a5c7bb80-c844-476f-ba47-1cd4e599b92d] Running
	I1014 08:23:09.901676    3988 system_pods.go:89] "kube-proxy-r74dx" [f8d14473-8859-4015-84e9-d00656cc00c9] Running
	I1014 08:23:09.901676    3988 system_pods.go:89] "kube-scheduler-multinode-671000" [97febcab-f54d-4338-ba7c-2dc5e69b77fc] Running
	I1014 08:23:09.901676    3988 system_pods.go:89] "storage-provisioner" [fde8ff75-bc7f-4db4-b098-c3a08b38d205] Running
	I1014 08:23:09.901676    3988 system_pods.go:126] duration metric: took 211.4394ms to wait for k8s-apps to be running ...
	I1014 08:23:09.901676    3988 system_svc.go:44] waiting for kubelet service to be running ....
	I1014 08:23:09.912300    3988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 08:23:09.939592    3988 system_svc.go:56] duration metric: took 37.9164ms WaitForService to wait for kubelet
	I1014 08:23:09.939660    3988 kubeadm.go:582] duration metric: took 24.9929009s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 08:23:09.939724    3988 node_conditions.go:102] verifying NodePressure condition ...
	I1014 08:23:10.083491    3988 request.go:632] Waited for 143.6293ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.100.167:8443/api/v1/nodes
	I1014 08:23:10.083491    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes
	I1014 08:23:10.083491    3988 round_trippers.go:469] Request Headers:
	I1014 08:23:10.083491    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:23:10.083491    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:23:10.087596    3988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:23:10.087596    3988 round_trippers.go:577] Response Headers:
	I1014 08:23:10.088131    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:23:10.088131    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:23:10 GMT
	I1014 08:23:10.088131    3988 round_trippers.go:580]     Audit-Id: 3831867c-c560-4b63-b15c-f751a1e8daf6
	I1014 08:23:10.088131    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:23:10.088131    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:23:10.088131    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:23:10.088334    3988 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"456"},"items":[{"metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"430","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 4836 chars]
	I1014 08:23:10.088954    3988 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 08:23:10.089138    3988 node_conditions.go:123] node cpu capacity is 2
	I1014 08:23:10.089138    3988 node_conditions.go:105] duration metric: took 149.4135ms to run NodePressure ...
	I1014 08:23:10.089138    3988 start.go:241] waiting for startup goroutines ...
	I1014 08:23:10.089201    3988 start.go:246] waiting for cluster config update ...
	I1014 08:23:10.089201    3988 start.go:255] writing updated cluster config ...
	I1014 08:23:10.093230    3988 out.go:201] 
	I1014 08:23:10.112648    3988 config.go:182] Loaded profile config "multinode-671000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 08:23:10.112764    3988 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000\config.json ...
	I1014 08:23:10.119018    3988 out.go:177] * Starting "multinode-671000-m02" worker node in "multinode-671000" cluster
	I1014 08:23:10.121328    3988 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1014 08:23:10.121328    3988 cache.go:56] Caching tarball of preloaded images
	I1014 08:23:10.121328    3988 preload.go:172] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1014 08:23:10.121328    3988 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1014 08:23:10.122334    3988 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000\config.json ...
	I1014 08:23:10.132936    3988 start.go:360] acquireMachinesLock for multinode-671000-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 08:23:10.132936    3988 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-671000-m02"
	I1014 08:23:10.132936    3988 start.go:93] Provisioning new machine with config: &{Name:multinode-671000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-671000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.100.167 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1014 08:23:10.132936    3988 start.go:125] createHost starting for "m02" (driver="hyperv")
	I1014 08:23:10.138059    3988 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1014 08:23:10.138059    3988 start.go:159] libmachine.API.Create for "multinode-671000" (driver="hyperv")
	I1014 08:23:10.138878    3988 client.go:168] LocalClient.Create starting
	I1014 08:23:10.139215    3988 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I1014 08:23:10.139215    3988 main.go:141] libmachine: Decoding PEM data...
	I1014 08:23:10.139215    3988 main.go:141] libmachine: Parsing certificate...
	I1014 08:23:10.139906    3988 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I1014 08:23:10.140110    3988 main.go:141] libmachine: Decoding PEM data...
	I1014 08:23:10.140110    3988 main.go:141] libmachine: Parsing certificate...
	I1014 08:23:10.140110    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I1014 08:23:12.083382    3988 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I1014 08:23:12.083512    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:23:12.083604    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I1014 08:23:13.783220    3988 main.go:141] libmachine: [stdout =====>] : False
	
	I1014 08:23:13.783220    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:23:13.783913    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1014 08:23:15.240767    3988 main.go:141] libmachine: [stdout =====>] : True
	
	I1014 08:23:15.240767    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:23:15.240901    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1014 08:23:18.723156    3988 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1014 08:23:18.723363    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:23:18.725505    3988 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1014 08:23:19.234224    3988 main.go:141] libmachine: Creating SSH key...
	I1014 08:23:19.839987    3988 main.go:141] libmachine: Creating VM...
	I1014 08:23:19.839987    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I1014 08:23:22.670470    3988 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I1014 08:23:22.670531    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:23:22.670531    3988 main.go:141] libmachine: Using switch "Default Switch"
	I1014 08:23:22.670531    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I1014 08:23:24.435414    3988 main.go:141] libmachine: [stdout =====>] : True
	
	I1014 08:23:24.435414    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:23:24.435414    3988 main.go:141] libmachine: Creating VHD
	I1014 08:23:24.435414    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-671000-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I1014 08:23:28.090559    3988 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-671000-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 9BF83274-A034-4723-AB0A-01640485F593
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I1014 08:23:28.091301    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:23:28.091301    3988 main.go:141] libmachine: Writing magic tar header
	I1014 08:23:28.091301    3988 main.go:141] libmachine: Writing SSH key tar header
	I1014 08:23:28.104888    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-671000-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-671000-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I1014 08:23:31.159240    3988 main.go:141] libmachine: [stdout =====>] : 
	I1014 08:23:31.160253    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:23:31.160303    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-671000-m02\disk.vhd' -SizeBytes 20000MB
	I1014 08:23:33.800445    3988 main.go:141] libmachine: [stdout =====>] : 
	I1014 08:23:33.800445    3988 main.go:141] libmachine: [stderr =====>] : 
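	The three-step VHD dance above is deliberate: a tiny fixed (flat) VHD is created first so raw bytes can be written straight into the image, which is what the "magic tar header" and SSH-key tar entries logged in between appear to be, presumably picked up by the boot2docker-style guest on first boot; only then is the image converted to a dynamic VHD and grown to the requested size. In outline (paths shortened from the machine directory used in the log):

	    New-VHD -Path fixed.vhd -SizeBytes 10MB -Fixed           # flat layout: file offset 0 equals disk offset 0
	    # ... write the tar header plus SSH public key into fixed.vhd ...
	    Convert-VHD -Path fixed.vhd -DestinationPath disk.vhd -VHDType Dynamic -DeleteSource
	    Resize-VHD -Path disk.vhd -SizeBytes 20000MB             # the guest now sees a 20000MB disk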
	I1014 08:23:33.800649    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-671000-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-671000-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I1014 08:23:37.311443    3988 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-671000-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I1014 08:23:37.311474    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:23:37.311474    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-671000-m02 -DynamicMemoryEnabled $false
	I1014 08:23:39.484236    3988 main.go:141] libmachine: [stdout =====>] : 
	I1014 08:23:39.484905    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:23:39.485046    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-671000-m02 -Count 2
	I1014 08:23:41.553080    3988 main.go:141] libmachine: [stdout =====>] : 
	I1014 08:23:41.553080    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:23:41.553080    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-671000-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-671000-m02\boot2docker.iso'
	I1014 08:23:44.083753    3988 main.go:141] libmachine: [stdout =====>] : 
	I1014 08:23:44.084126    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:23:44.084221    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-671000-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-671000-m02\disk.vhd'
	I1014 08:23:46.696054    3988 main.go:141] libmachine: [stdout =====>] : 
	I1014 08:23:46.696784    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:23:46.696784    3988 main.go:141] libmachine: Starting VM...
	I1014 08:23:46.696863    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-671000-m02
	I1014 08:23:49.914873    3988 main.go:141] libmachine: [stdout =====>] : 
	I1014 08:23:49.915452    3988 main.go:141] libmachine: [stderr =====>] : 
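	Condensing the creation calls above into one sequence (names as in the log; $machineDir is a placeholder for C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-671000-m02; New-VM without -Generation yields a Generation 1 VM, which is what allows booting from the attached ISO):

	    New-VM multinode-671000-m02 -Path $machineDir -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	    Set-VMMemory -VMName multinode-671000-m02 -DynamicMemoryEnabled $false   # fixed 2200MB allocation
	    Set-VMProcessor multinode-671000-m02 -Count 2
	    Set-VMDvdDrive -VMName multinode-671000-m02 -Path "$machineDir\boot2docker.iso"
	    Add-VMHardDiskDrive -VMName multinode-671000-m02 -Path "$machineDir\disk.vhd"
	    Start-VM multinode-671000-m02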
	I1014 08:23:49.915452    3988 main.go:141] libmachine: Waiting for host to start...
	I1014 08:23:49.915452    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000-m02 ).state
	I1014 08:23:52.177805    3988 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:23:52.177975    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:23:52.178039    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 08:23:54.661979    3988 main.go:141] libmachine: [stdout =====>] : 
	I1014 08:23:54.662040    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:23:55.663339    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000-m02 ).state
	I1014 08:23:57.819908    3988 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:23:57.820245    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:23:57.820245    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 08:24:00.346408    3988 main.go:141] libmachine: [stdout =====>] : 
	I1014 08:24:00.346705    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:24:01.347039    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000-m02 ).state
	I1014 08:24:03.538031    3988 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:24:03.538089    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:24:03.538089    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 08:24:06.025009    3988 main.go:141] libmachine: [stdout =====>] : 
	I1014 08:24:06.025009    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:24:07.025806    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000-m02 ).state
	I1014 08:24:09.212273    3988 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:24:09.212273    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:24:09.212490    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 08:24:11.720899    3988 main.go:141] libmachine: [stdout =====>] : 
	I1014 08:24:11.721399    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:24:12.722207    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000-m02 ).state
	I1014 08:24:14.948202    3988 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:24:14.948202    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:24:14.948202    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 08:24:17.539243    3988 main.go:141] libmachine: [stdout =====>] : 172.20.109.137
	
	I1014 08:24:17.540326    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:24:17.540326    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000-m02 ).state
	I1014 08:24:19.667474    3988 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:24:19.667717    3988 main.go:141] libmachine: [stderr =====>] : 
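	"Waiting for host to start" is a poll loop: each pass queries the VM state and then the first IP address on the first network adapter, sleeping about a second between empty results. The address (here 172.20.109.137) only appears once the guest has booted far enough for the Hyper-V integration services to report it. Roughly, as a sketch of the loop visible above (the driver adds its own timeout):

	    do {
	        $state = (Hyper-V\Get-VM multinode-671000-m02).State
	        $ip = ((Hyper-V\Get-VM multinode-671000-m02).NetworkAdapters[0]).IPAddresses[0]
	        if (-not $ip) { Start-Sleep -Seconds 1 }
	    } until ($ip)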
	I1014 08:24:19.667717    3988 machine.go:93] provisionDockerMachine start ...
	I1014 08:24:19.667778    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000-m02 ).state
	I1014 08:24:21.785909    3988 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:24:21.785909    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:24:21.786689    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 08:24:24.331260    3988 main.go:141] libmachine: [stdout =====>] : 172.20.109.137
	
	I1014 08:24:24.331260    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:24:24.337579    3988 main.go:141] libmachine: Using SSH client type: native
	I1014 08:24:24.350415    3988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.109.137 22 <nil> <nil>}
	I1014 08:24:24.351581    3988 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 08:24:24.488677    3988 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1014 08:24:24.488748    3988 buildroot.go:166] provisioning hostname "multinode-671000-m02"
	I1014 08:24:24.488837    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000-m02 ).state
	I1014 08:24:26.583417    3988 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:24:26.583417    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:24:26.583911    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 08:24:29.075001    3988 main.go:141] libmachine: [stdout =====>] : 172.20.109.137
	
	I1014 08:24:29.075001    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:24:29.081800    3988 main.go:141] libmachine: Using SSH client type: native
	I1014 08:24:29.082334    3988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.109.137 22 <nil> <nil>}
	I1014 08:24:29.082413    3988 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-671000-m02 && echo "multinode-671000-m02" | sudo tee /etc/hostname
	I1014 08:24:29.240334    3988 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-671000-m02
	
	I1014 08:24:29.240488    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000-m02 ).state
	I1014 08:24:31.306195    3988 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:24:31.307125    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:24:31.307125    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 08:24:33.768323    3988 main.go:141] libmachine: [stdout =====>] : 172.20.109.137
	
	I1014 08:24:33.768323    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:24:33.774355    3988 main.go:141] libmachine: Using SSH client type: native
	I1014 08:24:33.775284    3988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.109.137 22 <nil> <nil>}
	I1014 08:24:33.775284    3988 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-671000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-671000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-671000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 08:24:33.938309    3988 main.go:141] libmachine: SSH cmd err, output: <nil>: 
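	The /etc/hosts script above is written to be idempotent: if a line for the new hostname already exists it does nothing; otherwise it rewrites an existing 127.0.1.1 entry in place or appends one, so repeated provisioning runs leave a single

	    127.0.1.1 multinode-671000-m02

	entry rather than accumulating duplicates.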
	I1014 08:24:33.938487    3988 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I1014 08:24:33.938594    3988 buildroot.go:174] setting up certificates
	I1014 08:24:33.938594    3988 provision.go:84] configureAuth start
	I1014 08:24:33.938684    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000-m02 ).state
	I1014 08:24:36.078967    3988 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:24:36.079156    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:24:36.079156    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 08:24:38.579489    3988 main.go:141] libmachine: [stdout =====>] : 172.20.109.137
	
	I1014 08:24:38.579687    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:24:38.579687    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000-m02 ).state
	I1014 08:24:40.730891    3988 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:24:40.730891    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:24:40.731019    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 08:24:43.277783    3988 main.go:141] libmachine: [stdout =====>] : 172.20.109.137
	
	I1014 08:24:43.277783    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:24:43.277783    3988 provision.go:143] copyHostCerts
	I1014 08:24:43.277783    3988 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I1014 08:24:43.277783    3988 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I1014 08:24:43.278365    3988 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I1014 08:24:43.278817    3988 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1014 08:24:43.279722    3988 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I1014 08:24:43.280346    3988 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I1014 08:24:43.280346    3988 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I1014 08:24:43.280346    3988 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1014 08:24:43.281816    3988 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I1014 08:24:43.281816    3988 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I1014 08:24:43.281816    3988 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I1014 08:24:43.282699    3988 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I1014 08:24:43.283464    3988 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-671000-m02 san=[127.0.0.1 172.20.109.137 localhost minikube multinode-671000-m02]
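	The server certificate is issued with SANs covering every name a client might dial: loopback, the DHCP-assigned VM address, and the generic plus node-specific hostnames, so the TLS endpoint dockerd later exposes on tcp://0.0.0.0:2376 verifies regardless of which one is used. The SAN list can be inspected afterwards (assuming openssl is available and the command is run from the machines directory):

	    openssl x509 -in server.pem -noout -text | grep -A1 'Subject Alternative Name'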
	I1014 08:24:43.522251    3988 provision.go:177] copyRemoteCerts
	I1014 08:24:43.533469    3988 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 08:24:43.533666    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000-m02 ).state
	I1014 08:24:45.645964    3988 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:24:45.645964    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:24:45.646759    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 08:24:48.198948    3988 main.go:141] libmachine: [stdout =====>] : 172.20.109.137
	
	I1014 08:24:48.198948    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:24:48.199195    3988 sshutil.go:53] new ssh client: &{IP:172.20.109.137 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-671000-m02\id_rsa Username:docker}
	I1014 08:24:48.313486    3988 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7799292s)
	I1014 08:24:48.313486    3988 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1014 08:24:48.314804    3988 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1014 08:24:48.367271    3988 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1014 08:24:48.367854    3988 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I1014 08:24:48.419957    3988 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1014 08:24:48.420453    3988 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 08:24:48.469397    3988 provision.go:87] duration metric: took 14.5307809s to configureAuth
	I1014 08:24:48.469472    3988 buildroot.go:189] setting minikube options for container-runtime
	I1014 08:24:48.470209    3988 config.go:182] Loaded profile config "multinode-671000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 08:24:48.470336    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000-m02 ).state
	I1014 08:24:50.613107    3988 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:24:50.613604    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:24:50.613604    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 08:24:53.138550    3988 main.go:141] libmachine: [stdout =====>] : 172.20.109.137
	
	I1014 08:24:53.138716    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:24:53.143599    3988 main.go:141] libmachine: Using SSH client type: native
	I1014 08:24:53.144132    3988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.109.137 22 <nil> <nil>}
	I1014 08:24:53.144132    3988 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1014 08:24:53.278007    3988 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1014 08:24:53.278062    3988 buildroot.go:70] root file system type: tmpfs
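	The fstype probe matters for the next step: on this buildroot-based ISO the root filesystem is a tmpfs, meaning the running system is not persistent, so the docker unit has to be (re)written under /lib/systemd/system on every fresh boot rather than assumed to survive from an earlier provision. The probe itself is a one-liner:

	    df --output=fstype / | tail -n 1   # prints "tmpfs" on the minikube ISO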
	I1014 08:24:53.278062    3988 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1014 08:24:53.278062    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000-m02 ).state
	I1014 08:24:55.365416    3988 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:24:55.365416    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:24:55.365878    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 08:24:57.877197    3988 main.go:141] libmachine: [stdout =====>] : 172.20.109.137
	
	I1014 08:24:57.877648    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:24:57.884927    3988 main.go:141] libmachine: Using SSH client type: native
	I1014 08:24:57.885268    3988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.109.137 22 <nil> <nil>}
	I1014 08:24:57.885268    3988 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.20.100.167"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1014 08:24:58.048952    3988 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.20.100.167
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1014 08:24:58.048952    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000-m02 ).state
	I1014 08:25:00.169714    3988 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:25:00.169714    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:25:00.170739    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 08:25:02.746513    3988 main.go:141] libmachine: [stdout =====>] : 172.20.109.137
	
	I1014 08:25:02.746513    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:25:02.752728    3988 main.go:141] libmachine: Using SSH client type: native
	I1014 08:25:02.753491    3988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.109.137 22 <nil> <nil>}
	I1014 08:25:02.753491    3988 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1014 08:25:04.983768    3988 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1014 08:25:04.983835    3988 machine.go:96] duration metric: took 45.3160504s to provisionDockerMachine
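	The diff-or-replace one-liner a few lines up is the unit-install idiom: the freshly rendered docker.service.new is compared with whatever is installed, and only on a difference (or, as here on first boot, when the target does not exist at all, hence the "can't stat" message) is it moved into place, followed by daemon-reload, enable, and restart. As a standalone sh sketch:

	    sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || {
	        sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
	        sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker
	    }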
	I1014 08:25:04.983835    3988 client.go:171] duration metric: took 1m54.844703s to LocalClient.Create
	I1014 08:25:04.983905    3988 start.go:167] duration metric: took 1m54.8456743s to libmachine.API.Create "multinode-671000"
	I1014 08:25:04.983991    3988 start.go:293] postStartSetup for "multinode-671000-m02" (driver="hyperv")
	I1014 08:25:04.983991    3988 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 08:25:04.995749    3988 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 08:25:04.995749    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000-m02 ).state
	I1014 08:25:07.136217    3988 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:25:07.136217    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:25:07.136541    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 08:25:09.705929    3988 main.go:141] libmachine: [stdout =====>] : 172.20.109.137
	
	I1014 08:25:09.705929    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:25:09.706908    3988 sshutil.go:53] new ssh client: &{IP:172.20.109.137 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-671000-m02\id_rsa Username:docker}
	I1014 08:25:09.821046    3988 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8251946s)
	I1014 08:25:09.832897    3988 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 08:25:09.839232    3988 command_runner.go:130] > NAME=Buildroot
	I1014 08:25:09.839232    3988 command_runner.go:130] > VERSION=2023.02.9-dirty
	I1014 08:25:09.839232    3988 command_runner.go:130] > ID=buildroot
	I1014 08:25:09.839232    3988 command_runner.go:130] > VERSION_ID=2023.02.9
	I1014 08:25:09.839232    3988 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I1014 08:25:09.839232    3988 info.go:137] Remote host: Buildroot 2023.02.9
	I1014 08:25:09.839232    3988 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I1014 08:25:09.839771    3988 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I1014 08:25:09.840557    3988 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem -> 9362.pem in /etc/ssl/certs
	I1014 08:25:09.840557    3988 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem -> /etc/ssl/certs/9362.pem
	I1014 08:25:09.852460    3988 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 08:25:09.872498    3988 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem --> /etc/ssl/certs/9362.pem (1708 bytes)
	I1014 08:25:09.923207    3988 start.go:296] duration metric: took 4.9392082s for postStartSetup
	I1014 08:25:09.926598    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000-m02 ).state
	I1014 08:25:12.011686    3988 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:25:12.012714    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:25:12.012760    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 08:25:14.498458    3988 main.go:141] libmachine: [stdout =====>] : 172.20.109.137
	
	I1014 08:25:14.498458    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:25:14.498650    3988 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000\config.json ...
	I1014 08:25:14.501398    3988 start.go:128] duration metric: took 2m4.3682756s to createHost
	I1014 08:25:14.501513    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000-m02 ).state
	I1014 08:25:16.584845    3988 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:25:16.585158    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:25:16.585275    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 08:25:19.064307    3988 main.go:141] libmachine: [stdout =====>] : 172.20.109.137
	
	I1014 08:25:19.064307    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:25:19.071146    3988 main.go:141] libmachine: Using SSH client type: native
	I1014 08:25:19.071702    3988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.109.137 22 <nil> <nil>}
	I1014 08:25:19.071702    3988 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1014 08:25:19.201739    3988 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728919519.201207317
	
	I1014 08:25:19.201739    3988 fix.go:216] guest clock: 1728919519.201207317
	I1014 08:25:19.201739    3988 fix.go:229] Guest: 2024-10-14 08:25:19.201207317 -0700 PDT Remote: 2024-10-14 08:25:14.5015135 -0700 PDT m=+334.842187801 (delta=4.699693817s)
	I1014 08:25:19.201739    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000-m02 ).state
	I1014 08:25:21.260558    3988 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:25:21.260703    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:25:21.260784    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 08:25:23.721092    3988 main.go:141] libmachine: [stdout =====>] : 172.20.109.137
	
	I1014 08:25:23.722115    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:25:23.728090    3988 main.go:141] libmachine: Using SSH client type: native
	I1014 08:25:23.728575    3988 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.109.137 22 <nil> <nil>}
	I1014 08:25:23.728575    3988 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1728919519
	I1014 08:25:23.871425    3988 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Oct 14 15:25:19 UTC 2024
	
	I1014 08:25:23.871425    3988 fix.go:236] clock set: Mon Oct 14 15:25:19 UTC 2024
	 (err=<nil>)
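	The guest clock is synchronized by hand: the host reads the guest's clock over SSH, computes the offset against its own wall time (a 4.7s delta here, accumulated during the roughly two-minute create), and then writes a current host timestamp back. In terms of the two SSH commands above:

	    date +%s.%N                 # read the guest clock (1728919519.201207317 in this run)
	    sudo date -s @1728919519    # reset the guest to the host's current epoch second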
	I1014 08:25:23.871425    3988 start.go:83] releasing machines lock for "multinode-671000-m02", held for 2m13.7382882s
	I1014 08:25:23.872480    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000-m02 ).state
	I1014 08:25:25.940989    3988 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:25:25.940989    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:25:25.941787    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 08:25:28.394782    3988 main.go:141] libmachine: [stdout =====>] : 172.20.109.137
	
	I1014 08:25:28.394782    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:25:28.398742    3988 out.go:177] * Found network options:
	I1014 08:25:28.402040    3988 out.go:177]   - NO_PROXY=172.20.100.167
	W1014 08:25:28.404706    3988 proxy.go:119] fail to check proxy env: Error ip not in block
	I1014 08:25:28.407163    3988 out.go:177]   - NO_PROXY=172.20.100.167
	W1014 08:25:28.408819    3988 proxy.go:119] fail to check proxy env: Error ip not in block
	W1014 08:25:28.410911    3988 proxy.go:119] fail to check proxy env: Error ip not in block
	I1014 08:25:28.413068    3988 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1014 08:25:28.413068    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000-m02 ).state
	I1014 08:25:28.421778    3988 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1014 08:25:28.421778    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000-m02 ).state
	I1014 08:25:30.567144    3988 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:25:30.567144    3988 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:25:30.567471    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:25:30.567471    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:25:30.567471    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 08:25:30.567471    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 08:25:33.184356    3988 main.go:141] libmachine: [stdout =====>] : 172.20.109.137
	
	I1014 08:25:33.184356    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:25:33.184590    3988 sshutil.go:53] new ssh client: &{IP:172.20.109.137 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-671000-m02\id_rsa Username:docker}
	I1014 08:25:33.211650    3988 main.go:141] libmachine: [stdout =====>] : 172.20.109.137
	
	I1014 08:25:33.211650    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:25:33.211964    3988 sshutil.go:53] new ssh client: &{IP:172.20.109.137 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-671000-m02\id_rsa Username:docker}
	I1014 08:25:33.283731    3988 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I1014 08:25:33.283815    3988 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.8620297s)
	W1014 08:25:33.283935    3988 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 08:25:33.295444    3988 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 08:25:33.300476    3988 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I1014 08:25:33.300932    3988 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.8874005s)
	W1014 08:25:33.300932    3988 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1014 08:25:33.330385    3988 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1014 08:25:33.330385    3988 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1014 08:25:33.330385    3988 start.go:495] detecting cgroup driver to use...
	I1014 08:25:33.330385    3988 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 08:25:33.365225    3988 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1014 08:25:33.376704    3988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1014 08:25:33.412420    3988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	W1014 08:25:33.436179    3988 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W1014 08:25:33.436179    3988 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
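	The transcript just above shows why these two warnings appear: the connectivity probe was issued as curl.exe, the Windows binary name, but it executes inside the Linux guest over SSH, where only curl exists, so the shell exits with status 127 ("command not found") and minikube reports the registry as unreachable. A probe matching the guest environment would be:

	    curl -sS -m 2 https://registry.k8s.io/   # plain curl exists in the guest; curl.exe does not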
	I1014 08:25:33.439283    3988 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1014 08:25:33.450711    3988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1014 08:25:33.482695    3988 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1014 08:25:33.512093    3988 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1014 08:25:33.541109    3988 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1014 08:25:33.572338    3988 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 08:25:33.603860    3988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1014 08:25:33.634281    3988 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1014 08:25:33.664934    3988 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1014 08:25:33.697051    3988 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 08:25:33.714061    3988 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1014 08:25:33.714865    3988 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1014 08:25:33.726270    3988 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1014 08:25:33.756569    3988 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
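	The sysctl failure above is expected on a fresh guest: net.bridge.bridge-nf-call-iptables only exists once the br_netfilter module is loaded, so the recovery is to load the module and enable IPv4 forwarding, both prerequisites for pod networking:

	    sudo modprobe br_netfilter
	    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	    sudo sysctl net.bridge.bridge-nf-call-iptables   # now resolves instead of erroring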
	I1014 08:25:33.783004    3988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 08:25:33.992398    3988 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1014 08:25:34.027800    3988 start.go:495] detecting cgroup driver to use...
	I1014 08:25:34.039068    3988 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1014 08:25:34.064352    3988 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1014 08:25:34.064352    3988 command_runner.go:130] > [Unit]
	I1014 08:25:34.064352    3988 command_runner.go:130] > Description=Docker Application Container Engine
	I1014 08:25:34.064352    3988 command_runner.go:130] > Documentation=https://docs.docker.com
	I1014 08:25:34.064352    3988 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I1014 08:25:34.064352    3988 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I1014 08:25:34.064352    3988 command_runner.go:130] > StartLimitBurst=3
	I1014 08:25:34.064352    3988 command_runner.go:130] > StartLimitIntervalSec=60
	I1014 08:25:34.064352    3988 command_runner.go:130] > [Service]
	I1014 08:25:34.064352    3988 command_runner.go:130] > Type=notify
	I1014 08:25:34.064352    3988 command_runner.go:130] > Restart=on-failure
	I1014 08:25:34.064352    3988 command_runner.go:130] > Environment=NO_PROXY=172.20.100.167
	I1014 08:25:34.064352    3988 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1014 08:25:34.064352    3988 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1014 08:25:34.064352    3988 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1014 08:25:34.064352    3988 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1014 08:25:34.064352    3988 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1014 08:25:34.064352    3988 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1014 08:25:34.064352    3988 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1014 08:25:34.064352    3988 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1014 08:25:34.064352    3988 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1014 08:25:34.064352    3988 command_runner.go:130] > ExecStart=
	I1014 08:25:34.064352    3988 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I1014 08:25:34.064352    3988 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1014 08:25:34.064888    3988 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1014 08:25:34.064888    3988 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1014 08:25:34.064983    3988 command_runner.go:130] > LimitNOFILE=infinity
	I1014 08:25:34.064983    3988 command_runner.go:130] > LimitNPROC=infinity
	I1014 08:25:34.064983    3988 command_runner.go:130] > LimitCORE=infinity
	I1014 08:25:34.064983    3988 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1014 08:25:34.065056    3988 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1014 08:25:34.065056    3988 command_runner.go:130] > TasksMax=infinity
	I1014 08:25:34.065056    3988 command_runner.go:130] > TimeoutStartSec=0
	I1014 08:25:34.065100    3988 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1014 08:25:34.065136    3988 command_runner.go:130] > Delegate=yes
	I1014 08:25:34.065136    3988 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1014 08:25:34.065136    3988 command_runner.go:130] > KillMode=process
	I1014 08:25:34.065136    3988 command_runner.go:130] > [Install]
	I1014 08:25:34.065136    3988 command_runner.go:130] > WantedBy=multi-user.target
	I1014 08:25:34.077968    3988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 08:25:34.117594    3988 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 08:25:34.163047    3988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 08:25:34.199361    3988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1014 08:25:34.233135    3988 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1014 08:25:34.296743    3988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1014 08:25:34.320843    3988 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 08:25:34.353670    3988 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
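	With docker chosen as the runtime, crictl is repointed from containerd's socket to cri-dockerd's; /etc/crictl.yaml ends up containing exactly the single line echoed back above:

	    runtime-endpoint: unix:///var/run/cri-dockerd.sock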
	I1014 08:25:34.369724    3988 ssh_runner.go:195] Run: which cri-dockerd
	I1014 08:25:34.375448    3988 command_runner.go:130] > /usr/bin/cri-dockerd
	I1014 08:25:34.386039    3988 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1014 08:25:34.402975    3988 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1014 08:25:34.446167    3988 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1014 08:25:34.640504    3988 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1014 08:25:34.819223    3988 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1014 08:25:34.819223    3988 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1014 08:25:34.863724    3988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 08:25:35.051257    3988 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1014 08:25:37.645935    3988 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5945161s)
	I1014 08:25:37.657612    3988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1014 08:25:37.694241    3988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1014 08:25:37.728703    3988 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1014 08:25:37.920961    3988 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1014 08:25:38.119353    3988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 08:25:38.311786    3988 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1014 08:25:38.354423    3988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1014 08:25:38.386579    3988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 08:25:38.573420    3988 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
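	The systemctl churn above follows a fixed order: the socket unit is unmasked and enabled before anything is restarted, with a daemon-reload in between, so that cri-dockerd comes up socket-activated on top of the already-running dockerd:

	    sudo systemctl unmask cri-docker.socket
	    sudo systemctl enable cri-docker.socket
	    sudo systemctl daemon-reload
	    sudo systemctl restart cri-docker.socket
	    sudo systemctl restart cri-docker.service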
	I1014 08:25:38.685040    3988 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1014 08:25:38.695035    3988 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1014 08:25:38.704920    3988 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1014 08:25:38.704920    3988 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1014 08:25:38.704920    3988 command_runner.go:130] > Device: 0,22	Inode: 886         Links: 1
	I1014 08:25:38.705068    3988 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I1014 08:25:38.705068    3988 command_runner.go:130] > Access: 2024-10-14 15:25:38.595610809 +0000
	I1014 08:25:38.705134    3988 command_runner.go:130] > Modify: 2024-10-14 15:25:38.595610809 +0000
	I1014 08:25:38.705134    3988 command_runner.go:130] > Change: 2024-10-14 15:25:38.599610810 +0000
	I1014 08:25:38.705134    3988 command_runner.go:130] >  Birth: -
	I1014 08:25:38.705224    3988 start.go:563] Will wait 60s for crictl version
	I1014 08:25:38.714033    3988 ssh_runner.go:195] Run: which crictl
	I1014 08:25:38.720046    3988 command_runner.go:130] > /usr/bin/crictl
	I1014 08:25:38.732903    3988 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1014 08:25:38.787234    3988 command_runner.go:130] > Version:  0.1.0
	I1014 08:25:38.787234    3988 command_runner.go:130] > RuntimeName:  docker
	I1014 08:25:38.787234    3988 command_runner.go:130] > RuntimeVersion:  27.3.1
	I1014 08:25:38.787234    3988 command_runner.go:130] > RuntimeApiVersion:  v1
	I1014 08:25:38.787234    3988 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I1014 08:25:38.796063    3988 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1014 08:25:38.827114    3988 command_runner.go:130] > 27.3.1
	I1014 08:25:38.835109    3988 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1014 08:25:38.867789    3988 command_runner.go:130] > 27.3.1
	I1014 08:25:38.871752    3988 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	I1014 08:25:38.874755    3988 out.go:177]   - env NO_PROXY=172.20.100.167
	I1014 08:25:38.877752    3988 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I1014 08:25:38.881749    3988 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I1014 08:25:38.881749    3988 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I1014 08:25:38.881749    3988 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I1014 08:25:38.881749    3988 ip.go:211] Found interface: {Index:13 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:08:65:4d Flags:up|broadcast|multicast|running}
	I1014 08:25:38.884747    3988 ip.go:214] interface addr: fe80::b548:ff79:d464:75b8/64
	I1014 08:25:38.884747    3988 ip.go:214] interface addr: 172.20.96.1/20
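
The ip.go lines above show how the address for host.minikube.internal is chosen: enumerate the host's interfaces, keep the first one whose name matches the expected Hyper-V switch prefix, then take its first IPv4 address, which is why the fe80:: link-local entry is passed over in favor of 172.20.96.1. A rough equivalent using Go's net package (a sketch of the pattern, not minikube's actual ip.go):

    package main

    import (
        "fmt"
        "net"
        "strings"
    )

    // ipForInterfacePrefix returns the first IPv4 address on an interface whose
    // name starts with prefix, e.g. "vEthernet (Default Switch)".
    func ipForInterfacePrefix(prefix string) (net.IP, error) {
        ifaces, err := net.Interfaces()
        if err != nil {
            return nil, err
        }
        for _, iface := range ifaces {
            if !strings.HasPrefix(iface.Name, prefix) {
                continue // e.g. "Ethernet 2" does not match
            }
            addrs, err := iface.Addrs()
            if err != nil {
                return nil, err
            }
            for _, addr := range addrs {
                if ipnet, ok := addr.(*net.IPNet); ok {
                    if ip4 := ipnet.IP.To4(); ip4 != nil {
                        return ip4, nil // skips fe80:: link-local entries
                    }
                }
            }
        }
        return nil, fmt.Errorf("no interface matching prefix %q", prefix)
    }

    func main() {
        ip, err := ipForInterfacePrefix("vEthernet (Default Switch)")
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println(ip) // 172.20.96.1 in this run
    }
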
	I1014 08:25:38.893749    3988 ssh_runner.go:195] Run: grep 172.20.96.1	host.minikube.internal$ /etc/hosts
	I1014 08:25:38.900803    3988 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.20.96.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
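
The one-liner above makes the /etc/hosts update idempotent: grep -v first strips any existing host.minikube.internal entry, the fresh "172.20.96.1	host.minikube.internal" mapping is appended, the combined result is staged in a PID-suffixed temp file (/tmp/h.$$), and sudo cp installs it, because a plain shell redirection onto /etc/hosts would not run with root privileges. The same pattern reappears below for control-plane.minikube.internal.
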
	I1014 08:25:38.923142    3988 mustload.go:65] Loading cluster: multinode-671000
	I1014 08:25:38.924284    3988 config.go:182] Loaded profile config "multinode-671000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 08:25:38.925274    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000 ).state
	I1014 08:25:40.999589    3988 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:25:40.999589    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:25:40.999589    3988 host.go:66] Checking if "multinode-671000" exists ...
	I1014 08:25:41.000406    3988 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000 for IP: 172.20.109.137
	I1014 08:25:41.000406    3988 certs.go:194] generating shared ca certs ...
	I1014 08:25:41.000406    3988 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 08:25:41.001213    3988 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I1014 08:25:41.001584    3988 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I1014 08:25:41.001745    3988 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I1014 08:25:41.002060    3988 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I1014 08:25:41.002269    3988 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1014 08:25:41.002593    3988 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1014 08:25:41.003209    3988 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\936.pem (1338 bytes)
	W1014 08:25:41.003464    3988 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\936_empty.pem, impossibly tiny 0 bytes
	I1014 08:25:41.003671    3988 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1014 08:25:41.003745    3988 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I1014 08:25:41.004290    3988 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1014 08:25:41.004796    3988 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I1014 08:25:41.005336    3988 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem (1708 bytes)
	I1014 08:25:41.005521    3988 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\936.pem -> /usr/share/ca-certificates/936.pem
	I1014 08:25:41.005724    3988 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem -> /usr/share/ca-certificates/9362.pem
	I1014 08:25:41.005973    3988 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1014 08:25:41.005973    3988 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 08:25:41.053843    3988 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1014 08:25:41.102402    3988 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 08:25:41.148416    3988 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 08:25:41.193365    3988 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\936.pem --> /usr/share/ca-certificates/936.pem (1338 bytes)
	I1014 08:25:41.243456    3988 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem --> /usr/share/ca-certificates/9362.pem (1708 bytes)
	I1014 08:25:41.292919    3988 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 08:25:41.353906    3988 ssh_runner.go:195] Run: openssl version
	I1014 08:25:41.363200    3988 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I1014 08:25:41.374127    3988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/936.pem && ln -fs /usr/share/ca-certificates/936.pem /etc/ssl/certs/936.pem"
	I1014 08:25:41.402932    3988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/936.pem
	I1014 08:25:41.411345    3988 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 14 14:00 /usr/share/ca-certificates/936.pem
	I1014 08:25:41.411463    3988 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 14:00 /usr/share/ca-certificates/936.pem
	I1014 08:25:41.421636    3988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/936.pem
	I1014 08:25:41.429677    3988 command_runner.go:130] > 51391683
	I1014 08:25:41.440255    3988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/936.pem /etc/ssl/certs/51391683.0"
	I1014 08:25:41.470801    3988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9362.pem && ln -fs /usr/share/ca-certificates/9362.pem /etc/ssl/certs/9362.pem"
	I1014 08:25:41.500176    3988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9362.pem
	I1014 08:25:41.506977    3988 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 14 14:00 /usr/share/ca-certificates/9362.pem
	I1014 08:25:41.507455    3988 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 14:00 /usr/share/ca-certificates/9362.pem
	I1014 08:25:41.518166    3988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9362.pem
	I1014 08:25:41.526906    3988 command_runner.go:130] > 3ec20f2e
	I1014 08:25:41.537411    3988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9362.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 08:25:41.567010    3988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 08:25:41.595036    3988 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 08:25:41.601521    3988 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 14 13:44 /usr/share/ca-certificates/minikubeCA.pem
	I1014 08:25:41.601571    3988 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 13:44 /usr/share/ca-certificates/minikubeCA.pem
	I1014 08:25:41.612544    3988 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 08:25:41.621656    3988 command_runner.go:130] > b5213941
	I1014 08:25:41.631774    3988 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
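
Each certificate placed under /usr/share/ca-certificates is then linked into /etc/ssl/certs under its OpenSSL subject hash, which is what the 51391683, 3ec20f2e and b5213941 values above are: OpenSSL resolves trust anchors at verification time by looking up <hash>.0 (the .0 suffix leaves room for hash collisions). A sketch of that step, shelling out to openssl the same way the log does (it must run as root to write /etc/ssl/certs):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // installCATrustLink hashes a PEM certificate with `openssl x509 -hash` and
    // creates the /etc/ssl/certs/<hash>.0 symlink that OpenSSL's lookup expects.
    func installCATrustLink(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
        _ = os.Remove(link) // replace a stale link, mirroring ln -fs
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := installCATrustLink("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
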
	I1014 08:25:41.662191    3988 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 08:25:41.670252    3988 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1014 08:25:41.670317    3988 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1014 08:25:41.670583    3988 kubeadm.go:934] updating node {m02 172.20.109.137 8443 v1.31.1 docker false true} ...
	I1014 08:25:41.670636    3988 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-671000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.20.109.137
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-671000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
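
A note on the kubelet unit rendered above: the bare ExecStart= line is deliberate. systemd permits only one ExecStart for a regular service, so a drop-in that replaces the command must first clear the inherited value with an empty ExecStart= and then set its own; that pair of lines is what lands in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few steps below.
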
	I1014 08:25:41.681066    3988 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1014 08:25:41.698085    3988 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	I1014 08:25:41.698538    3988 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1014 08:25:41.709451    3988 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1014 08:25:41.727165    3988 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I1014 08:25:41.727165    3988 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1014 08:25:41.727165    3988 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
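
The ?checksum=file:<url>.sha256 query on these URLs means each binary is verified against the SHA-256 digest published next to it before being shipped into the node. A self-contained sketch of that verification (the local "kubelet" path in main is an assumption for illustration; the real flow downloads into the minikube cache first):

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "io"
        "net/http"
        "os"
        "strings"
    )

    // fetchString downloads a small text resource such as kubelet.sha256,
    // which contains just the hex digest.
    func fetchString(url string) (string, error) {
        resp, err := http.Get(url)
        if err != nil {
            return "", err
        }
        defer resp.Body.Close()
        b, err := io.ReadAll(resp.Body)
        return strings.TrimSpace(string(b)), err
    }

    // verifySHA256 streams a local file through SHA-256 and compares the digest.
    func verifySHA256(path, want string) error {
        f, err := os.Open(path)
        if err != nil {
            return err
        }
        defer f.Close()
        h := sha256.New()
        if _, err := io.Copy(h, f); err != nil {
            return err
        }
        if got := hex.EncodeToString(h.Sum(nil)); got != want {
            return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
        }
        return nil
    }

    func main() {
        want, err := fetchString("https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256")
        if err == nil {
            err = verifySHA256("kubelet", want) // assumed local download path
        }
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
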
	I1014 08:25:41.728158    3988 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1014 08:25:41.728158    3988 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1014 08:25:41.740157    3988 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1014 08:25:41.740157    3988 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1014 08:25:41.741169    3988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 08:25:41.753289    3988 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1014 08:25:41.753357    3988 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1014 08:25:41.753562    3988 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I1014 08:25:41.779126    3988 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1014 08:25:41.779126    3988 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1014 08:25:41.779621    3988 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1014 08:25:41.779692    3988 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I1014 08:25:41.791259    3988 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1014 08:25:41.842762    3988 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1014 08:25:41.844304    3988 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1014 08:25:41.844376    3988 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\linux\amd64\v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I1014 08:25:43.730930    3988 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1014 08:25:43.751836    3988 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1014 08:25:43.785349    3988 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 08:25:43.826836    3988 ssh_runner.go:195] Run: grep 172.20.100.167	control-plane.minikube.internal$ /etc/hosts
	I1014 08:25:43.834404    3988 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.20.100.167	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 08:25:43.867380    3988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 08:25:44.065634    3988 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 08:25:44.097720    3988 host.go:66] Checking if "multinode-671000" exists ...
	I1014 08:25:44.098526    3988 start.go:317] joinCluster: &{Name:multinode-671000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-671000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.100.167 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.20.109.137 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 08:25:44.098526    3988 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1014 08:25:44.099133    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000 ).state
	I1014 08:25:46.160545    3988 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:25:46.160545    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:25:46.160545    3988 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000 ).networkadapters[0]).ipaddresses[0]
	I1014 08:25:48.627740    3988 main.go:141] libmachine: [stdout =====>] : 172.20.100.167
	
	I1014 08:25:48.627906    3988 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:25:48.628114    3988 sshutil.go:53] new ssh client: &{IP:172.20.100.167 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-671000\id_rsa Username:docker}
	I1014 08:25:48.807878    3988 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token qw6odv.goexi2byty1iz0t9 --discovery-token-ca-cert-hash sha256:f876f951efbe7f1ed47ca578ee0d33e6ce8fe69c1a008a8154ee00f56ac84e46 
	I1014 08:25:48.807878    3988 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0": (4.7093443s)
	I1014 08:25:48.807878    3988 start.go:343] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.20.109.137 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1014 08:25:48.807878    3988 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token qw6odv.goexi2byty1iz0t9 --discovery-token-ca-cert-hash sha256:f876f951efbe7f1ed47ca578ee0d33e6ce8fe69c1a008a8154ee00f56ac84e46 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-671000-m02"
	I1014 08:25:48.993655    3988 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 08:25:50.334136    3988 command_runner.go:130] > [preflight] Running pre-flight checks
	I1014 08:25:50.334280    3988 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1014 08:25:50.334280    3988 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1014 08:25:50.334280    3988 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 08:25:50.334446    3988 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 08:25:50.334446    3988 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1014 08:25:50.334446    3988 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 08:25:50.334565    3988 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 514.701672ms
	I1014 08:25:50.334565    3988 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
	I1014 08:25:50.334701    3988 command_runner.go:130] > This node has joined the cluster:
	I1014 08:25:50.334701    3988 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I1014 08:25:50.334951    3988 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I1014 08:25:50.334951    3988 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I1014 08:25:50.335114    3988 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token qw6odv.goexi2byty1iz0t9 --discovery-token-ca-cert-hash sha256:f876f951efbe7f1ed47ca578ee0d33e6ce8fe69c1a008a8154ee00f56ac84e46 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-671000-m02": (1.5270714s)
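
The --discovery-token-ca-cert-hash in the join command is what lets the new node authenticate the control plane before trusting anything it serves: it is the SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info, prefixed with "sha256:". It can be recomputed from ca.crt alone, as in this sketch (the path is kubeadm's conventional location, an assumption here):

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    // caCertHash reproduces kubeadm's discovery hash: SHA-256 over the CA
    // certificate's raw Subject Public Key Info.
    func caCertHash(caPath string) (string, error) {
        pemBytes, err := os.ReadFile(caPath)
        if err != nil {
            return "", err
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            return "", fmt.Errorf("no PEM block in %s", caPath)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return "", err
        }
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        return fmt.Sprintf("sha256:%x", sum), nil
    }

    func main() {
        h, err := caCertHash("/etc/kubernetes/pki/ca.crt")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println(h) // should match the hash in the join command above
    }
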
	I1014 08:25:50.335114    3988 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1014 08:25:50.547149    3988 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I1014 08:25:50.738259    3988 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-671000-m02 minikube.k8s.io/updated_at=2024_10_14T08_25_50_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b minikube.k8s.io/name=multinode-671000 minikube.k8s.io/primary=false
	I1014 08:25:50.886026    3988 command_runner.go:130] > node/multinode-671000-m02 labeled
	I1014 08:25:50.886115    3988 start.go:319] duration metric: took 6.7875782s to joinCluster
	I1014 08:25:50.886115    3988 start.go:235] Will wait 6m0s for node &{Name:m02 IP:172.20.109.137 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1014 08:25:50.886877    3988 config.go:182] Loaded profile config "multinode-671000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 08:25:50.889071    3988 out.go:177] * Verifying Kubernetes components...
	I1014 08:25:50.903685    3988 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 08:25:51.113159    3988 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 08:25:51.140100    3988 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1014 08:25:51.140958    3988 kapi.go:59] client config for multinode-671000: &rest.Config{Host:"https://172.20.100.167:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-671000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-671000\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2926ae0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1014 08:25:51.142677    3988 node_ready.go:35] waiting up to 6m0s for node "multinode-671000-m02" to be "Ready" ...
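
The repeated GETs that follow are this wait loop in action: the Node object is re-fetched roughly every 500ms and its status.conditions are checked for a Ready condition of "True" (each node_ready.go:53 line records a "False" observation). An equivalent check sketched via kubectl's jsonpath output, which carries the kubeconfig credentials that the client-go REST client above uses directly (a sketch of the check, not minikube's implementation):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // isNodeReady asks the API server, via kubectl, for the node's Ready condition.
    func isNodeReady(name string) (bool, error) {
        out, err := exec.Command("kubectl", "get", "node", name,
            "-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
        if err != nil {
            return false, err
        }
        return strings.TrimSpace(string(out)) == "True", nil
    }

    func main() {
        deadline := time.Now().Add(6 * time.Minute) // same budget as the log
        for time.Now().Before(deadline) {
            if ready, err := isNodeReady("multinode-671000-m02"); err == nil && ready {
                fmt.Println("node is Ready")
                return
            }
            time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence above
        }
        fmt.Println("timed out waiting for node to become Ready")
    }
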
	I1014 08:25:51.142923    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000-m02
	I1014 08:25:51.142980    3988 round_trippers.go:469] Request Headers:
	I1014 08:25:51.142980    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:25:51.142980    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:25:51.155605    3988 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1014 08:25:51.155751    3988 round_trippers.go:577] Response Headers:
	I1014 08:25:51.155751    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:25:51.155751    3988 round_trippers.go:580]     Content-Length: 3921
	I1014 08:25:51.155751    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:25:51 GMT
	I1014 08:25:51.155751    3988 round_trippers.go:580]     Audit-Id: e28809a5-8622-4a99-b8cd-45aadb2e1be1
	I1014 08:25:51.155751    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:25:51.155751    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:25:51.155751    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:25:51.155861    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000-m02","uid":"6883c70a-cd16-4441-905e-912bed6d2b2c","resourceVersion":"611","creationTimestamp":"2024-10-14T15:25:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_10_14T08_25_50_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:25:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 2897 chars]
	I1014 08:25:51.643319    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000-m02
	I1014 08:25:51.643319    3988 round_trippers.go:469] Request Headers:
	I1014 08:25:51.643319    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:25:51.643319    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:25:51.647332    3988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:25:51.648329    3988 round_trippers.go:577] Response Headers:
	I1014 08:25:51.648380    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:25:51.648380    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:25:51.648380    3988 round_trippers.go:580]     Content-Length: 3921
	I1014 08:25:51.648380    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:25:51 GMT
	I1014 08:25:51.648380    3988 round_trippers.go:580]     Audit-Id: 5aee2610-10a2-436c-ba34-3308d05d7426
	I1014 08:25:51.648380    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:25:51.648380    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:25:51.648587    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000-m02","uid":"6883c70a-cd16-4441-905e-912bed6d2b2c","resourceVersion":"611","creationTimestamp":"2024-10-14T15:25:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_10_14T08_25_50_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:25:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 2897 chars]
	I1014 08:25:52.146246    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000-m02
	I1014 08:25:52.146246    3988 round_trippers.go:469] Request Headers:
	I1014 08:25:52.146246    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:25:52.146246    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:25:52.150844    3988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:25:52.150968    3988 round_trippers.go:577] Response Headers:
	I1014 08:25:52.150968    3988 round_trippers.go:580]     Content-Length: 3921
	I1014 08:25:52.150968    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:25:52 GMT
	I1014 08:25:52.150968    3988 round_trippers.go:580]     Audit-Id: 1dac4f37-dd5c-4d33-aaeb-1a82171d18f8
	I1014 08:25:52.150968    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:25:52.150968    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:25:52.150968    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:25:52.150968    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:25:52.151122    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000-m02","uid":"6883c70a-cd16-4441-905e-912bed6d2b2c","resourceVersion":"611","creationTimestamp":"2024-10-14T15:25:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_10_14T08_25_50_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:25:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 2897 chars]
	I1014 08:25:52.643812    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000-m02
	I1014 08:25:52.643812    3988 round_trippers.go:469] Request Headers:
	I1014 08:25:52.643812    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:25:52.643812    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:25:52.648448    3988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:25:52.648448    3988 round_trippers.go:577] Response Headers:
	I1014 08:25:52.648448    3988 round_trippers.go:580]     Audit-Id: 630abbcb-539b-44c2-a763-022197a89733
	I1014 08:25:52.648448    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:25:52.648448    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:25:52.648448    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:25:52.648591    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:25:52.648620    3988 round_trippers.go:580]     Content-Length: 3921
	I1014 08:25:52.648620    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:25:52 GMT
	I1014 08:25:52.648738    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000-m02","uid":"6883c70a-cd16-4441-905e-912bed6d2b2c","resourceVersion":"611","creationTimestamp":"2024-10-14T15:25:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_10_14T08_25_50_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:25:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 2897 chars]
	I1014 08:25:53.143085    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000-m02
	I1014 08:25:53.143085    3988 round_trippers.go:469] Request Headers:
	I1014 08:25:53.143085    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:25:53.143085    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:25:53.147658    3988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:25:53.147730    3988 round_trippers.go:577] Response Headers:
	I1014 08:25:53.147730    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:25:53.147730    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:25:53.147730    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:25:53.147730    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:25:53.147730    3988 round_trippers.go:580]     Content-Length: 3921
	I1014 08:25:53.147730    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:25:53 GMT
	I1014 08:25:53.147730    3988 round_trippers.go:580]     Audit-Id: ed9cb77d-031d-48dc-9b80-87854a83fc0f
	I1014 08:25:53.147730    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000-m02","uid":"6883c70a-cd16-4441-905e-912bed6d2b2c","resourceVersion":"611","creationTimestamp":"2024-10-14T15:25:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_10_14T08_25_50_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:25:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 2897 chars]
	I1014 08:25:53.148364    3988 node_ready.go:53] node "multinode-671000-m02" has status "Ready":"False"
	I1014 08:25:53.643752    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000-m02
	I1014 08:25:53.643752    3988 round_trippers.go:469] Request Headers:
	I1014 08:25:53.643752    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:25:53.643752    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:25:53.648386    3988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:25:53.648386    3988 round_trippers.go:577] Response Headers:
	I1014 08:25:53.648386    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:25:53.648386    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:25:53.648386    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:25:53.648386    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:25:53.648386    3988 round_trippers.go:580]     Content-Length: 3921
	I1014 08:25:53.648386    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:25:53 GMT
	I1014 08:25:53.648386    3988 round_trippers.go:580]     Audit-Id: 26572d1f-ff57-4ff5-88f6-916f273c2968
	I1014 08:25:53.648386    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000-m02","uid":"6883c70a-cd16-4441-905e-912bed6d2b2c","resourceVersion":"611","creationTimestamp":"2024-10-14T15:25:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_10_14T08_25_50_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:25:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 2897 chars]
	I1014 08:25:54.142809    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000-m02
	I1014 08:25:54.142809    3988 round_trippers.go:469] Request Headers:
	I1014 08:25:54.142809    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:25:54.142809    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:25:54.337508    3988 round_trippers.go:574] Response Status: 200 OK in 194 milliseconds
	I1014 08:25:54.337596    3988 round_trippers.go:577] Response Headers:
	I1014 08:25:54.337596    3988 round_trippers.go:580]     Audit-Id: a76d357d-5828-479c-a1b6-97448e4d6a7a
	I1014 08:25:54.337686    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:25:54.337686    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:25:54.337686    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:25:54.337686    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:25:54.337686    3988 round_trippers.go:580]     Content-Length: 4030
	I1014 08:25:54.337686    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:25:54 GMT
	I1014 08:25:54.337902    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000-m02","uid":"6883c70a-cd16-4441-905e-912bed6d2b2c","resourceVersion":"617","creationTimestamp":"2024-10-14T15:25:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_10_14T08_25_50_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:25:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I1014 08:25:54.643716    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000-m02
	I1014 08:25:54.643716    3988 round_trippers.go:469] Request Headers:
	I1014 08:25:54.643716    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:25:54.643716    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:25:54.647634    3988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:25:54.647634    3988 round_trippers.go:577] Response Headers:
	I1014 08:25:54.647634    3988 round_trippers.go:580]     Content-Length: 4030
	I1014 08:25:54.647634    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:25:54 GMT
	I1014 08:25:54.647802    3988 round_trippers.go:580]     Audit-Id: bcea0d23-5a05-4831-9fe8-0ddc71330c8e
	I1014 08:25:54.647802    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:25:54.647802    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:25:54.647802    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:25:54.647802    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:25:54.647802    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000-m02","uid":"6883c70a-cd16-4441-905e-912bed6d2b2c","resourceVersion":"617","creationTimestamp":"2024-10-14T15:25:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_10_14T08_25_50_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:25:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I1014 08:25:55.143736    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000-m02
	I1014 08:25:55.143821    3988 round_trippers.go:469] Request Headers:
	I1014 08:25:55.143821    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:25:55.143821    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:25:55.147159    3988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:25:55.148208    3988 round_trippers.go:577] Response Headers:
	I1014 08:25:55.148265    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:25:55.148265    3988 round_trippers.go:580]     Content-Length: 4030
	I1014 08:25:55.148265    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:25:55 GMT
	I1014 08:25:55.148265    3988 round_trippers.go:580]     Audit-Id: bfe91cf7-23a9-4a99-b2df-d5828bf92073
	I1014 08:25:55.148265    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:25:55.148265    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:25:55.148265    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:25:55.148558    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000-m02","uid":"6883c70a-cd16-4441-905e-912bed6d2b2c","resourceVersion":"617","creationTimestamp":"2024-10-14T15:25:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_10_14T08_25_50_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:25:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I1014 08:25:55.149029    3988 node_ready.go:53] node "multinode-671000-m02" has status "Ready":"False"
	I1014 08:25:55.642958    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000-m02
	I1014 08:25:55.642958    3988 round_trippers.go:469] Request Headers:
	I1014 08:25:55.642958    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:25:55.642958    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:25:55.647658    3988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:25:55.647658    3988 round_trippers.go:577] Response Headers:
	I1014 08:25:55.647747    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:25:55.647747    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:25:55.647747    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:25:55.647747    3988 round_trippers.go:580]     Content-Length: 4030
	I1014 08:25:55.647747    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:25:55 GMT
	I1014 08:25:55.647747    3988 round_trippers.go:580]     Audit-Id: 5e0dec6f-208e-49d7-803a-156fec0afc74
	I1014 08:25:55.647747    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:25:55.647917    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000-m02","uid":"6883c70a-cd16-4441-905e-912bed6d2b2c","resourceVersion":"617","creationTimestamp":"2024-10-14T15:25:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_10_14T08_25_50_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:25:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I1014 08:25:56.231881    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000-m02
	I1014 08:25:56.231881    3988 round_trippers.go:469] Request Headers:
	I1014 08:25:56.232893    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:25:56.232893    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:25:56.237898    3988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:25:56.237898    3988 round_trippers.go:577] Response Headers:
	I1014 08:25:56.237898    3988 round_trippers.go:580]     Audit-Id: 349c417c-09c7-4834-a7b5-882f9e9ceca5
	I1014 08:25:56.237898    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:25:56.237898    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:25:56.237898    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:25:56.237898    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:25:56.237898    3988 round_trippers.go:580]     Content-Length: 4030
	I1014 08:25:56.237898    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:25:56 GMT
	I1014 08:25:56.237898    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000-m02","uid":"6883c70a-cd16-4441-905e-912bed6d2b2c","resourceVersion":"617","creationTimestamp":"2024-10-14T15:25:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_10_14T08_25_50_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:25:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I1014 08:25:56.647893    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000-m02
	I1014 08:25:56.647893    3988 round_trippers.go:469] Request Headers:
	I1014 08:25:56.647893    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:25:56.647893    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:25:56.652182    3988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:25:56.652253    3988 round_trippers.go:577] Response Headers:
	I1014 08:25:56.652253    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:25:56.652253    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:25:56.652253    3988 round_trippers.go:580]     Content-Length: 4030
	I1014 08:25:56.652253    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:25:56 GMT
	I1014 08:25:56.652253    3988 round_trippers.go:580]     Audit-Id: 8b0d7600-9328-44df-a4bc-5cb59ea1f088
	I1014 08:25:56.652253    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:25:56.652253    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:25:56.652253    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000-m02","uid":"6883c70a-cd16-4441-905e-912bed6d2b2c","resourceVersion":"617","creationTimestamp":"2024-10-14T15:25:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_10_14T08_25_50_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:25:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I1014 08:25:57.142765    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000-m02
	I1014 08:25:57.142765    3988 round_trippers.go:469] Request Headers:
	I1014 08:25:57.142765    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:25:57.142765    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:25:57.148034    3988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:25:57.148034    3988 round_trippers.go:577] Response Headers:
	I1014 08:25:57.148034    3988 round_trippers.go:580]     Audit-Id: 87614dde-a529-4720-86ca-10e1bfdf8b5f
	I1014 08:25:57.148034    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:25:57.148034    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:25:57.148034    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:25:57.148191    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:25:57.148191    3988 round_trippers.go:580]     Content-Length: 4030
	I1014 08:25:57.148191    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:25:57 GMT
	I1014 08:25:57.148362    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000-m02","uid":"6883c70a-cd16-4441-905e-912bed6d2b2c","resourceVersion":"617","creationTimestamp":"2024-10-14T15:25:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_10_14T08_25_50_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:25:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I1014 08:25:57.642995    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000-m02
	I1014 08:25:57.642995    3988 round_trippers.go:469] Request Headers:
	I1014 08:25:57.643458    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:25:57.643458    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:25:57.648216    3988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:25:57.648286    3988 round_trippers.go:577] Response Headers:
	I1014 08:25:57.648286    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:25:57.648286    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:25:57.648286    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:25:57.648354    3988 round_trippers.go:580]     Content-Length: 4030
	I1014 08:25:57.648354    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:25:57 GMT
	I1014 08:25:57.648354    3988 round_trippers.go:580]     Audit-Id: 29f93a57-4d6a-40d4-8ff9-277ec5701787
	I1014 08:25:57.648354    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:25:57.648603    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000-m02","uid":"6883c70a-cd16-4441-905e-912bed6d2b2c","resourceVersion":"617","creationTimestamp":"2024-10-14T15:25:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_10_14T08_25_50_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:25:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I1014 08:25:57.648845    3988 node_ready.go:53] node "multinode-671000-m02" has status "Ready":"False"
	I1014 08:25:58.143283    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000-m02
	I1014 08:25:58.143283    3988 round_trippers.go:469] Request Headers:
	I1014 08:25:58.143283    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:25:58.143283    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:25:58.148100    3988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:25:58.148197    3988 round_trippers.go:577] Response Headers:
	I1014 08:25:58.148197    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:25:58.148197    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:25:58.148197    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:25:58.148296    3988 round_trippers.go:580]     Content-Length: 4030
	I1014 08:25:58.148296    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:25:58 GMT
	I1014 08:25:58.148296    3988 round_trippers.go:580]     Audit-Id: ff466a13-705b-4b6b-b65e-b71e0b0471c5
	I1014 08:25:58.148296    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:25:58.148389    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000-m02","uid":"6883c70a-cd16-4441-905e-912bed6d2b2c","resourceVersion":"617","creationTimestamp":"2024-10-14T15:25:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_10_14T08_25_50_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:25:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I1014 08:25:58.643883    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000-m02
	I1014 08:25:58.644009    3988 round_trippers.go:469] Request Headers:
	I1014 08:25:58.644009    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:25:58.644009    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:25:58.647457    3988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 08:25:58.647499    3988 round_trippers.go:577] Response Headers:
	I1014 08:25:58.647499    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:25:58.647499    3988 round_trippers.go:580]     Content-Length: 4030
	I1014 08:25:58.647499    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:25:58 GMT
	I1014 08:25:58.647570    3988 round_trippers.go:580]     Audit-Id: 63ee69d1-3d27-420e-9088-30d4f8db40b3
	I1014 08:25:58.647570    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:25:58.647570    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:25:58.647570    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:25:58.647739    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000-m02","uid":"6883c70a-cd16-4441-905e-912bed6d2b2c","resourceVersion":"617","creationTimestamp":"2024-10-14T15:25:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_10_14T08_25_50_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:25:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I1014 08:25:59.143504    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000-m02
	I1014 08:25:59.143504    3988 round_trippers.go:469] Request Headers:
	I1014 08:25:59.144011    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:25:59.144011    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:25:59.148526    3988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:25:59.148616    3988 round_trippers.go:577] Response Headers:
	I1014 08:25:59.148616    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:25:59.148616    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:25:59.148616    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:25:59.148616    3988 round_trippers.go:580]     Content-Length: 4030
	I1014 08:25:59.148616    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:25:59 GMT
	I1014 08:25:59.148616    3988 round_trippers.go:580]     Audit-Id: 8dd9d49f-3426-4ec5-b2b6-f5f6661dc558
	I1014 08:25:59.148616    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:25:59.148878    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000-m02","uid":"6883c70a-cd16-4441-905e-912bed6d2b2c","resourceVersion":"617","creationTimestamp":"2024-10-14T15:25:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_10_14T08_25_50_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:25:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I1014 08:25:59.642898    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000-m02
	I1014 08:25:59.643433    3988 round_trippers.go:469] Request Headers:
	I1014 08:25:59.643433    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:25:59.643433    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:25:59.648093    3988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:25:59.648093    3988 round_trippers.go:577] Response Headers:
	I1014 08:25:59.648093    3988 round_trippers.go:580]     Content-Length: 4030
	I1014 08:25:59.648093    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:25:59 GMT
	I1014 08:25:59.648093    3988 round_trippers.go:580]     Audit-Id: 15081a22-2782-4fde-bc63-ee51c696af59
	I1014 08:25:59.648093    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:25:59.648093    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:25:59.648093    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:25:59.648093    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:25:59.648093    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000-m02","uid":"6883c70a-cd16-4441-905e-912bed6d2b2c","resourceVersion":"617","creationTimestamp":"2024-10-14T15:25:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_10_14T08_25_50_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:25:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I1014 08:26:00.144133    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000-m02
	I1014 08:26:00.144236    3988 round_trippers.go:469] Request Headers:
	I1014 08:26:00.144236    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:26:00.144236    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:26:00.153870    3988 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1014 08:26:00.153870    3988 round_trippers.go:577] Response Headers:
	I1014 08:26:00.153870    3988 round_trippers.go:580]     Audit-Id: bbcfd87a-4aff-466f-8df2-df3fdf71483f
	I1014 08:26:00.153998    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:26:00.153998    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:26:00.153998    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:26:00.153998    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:26:00.153998    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:26:00 GMT
	I1014 08:26:00.154267    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000-m02","uid":"6883c70a-cd16-4441-905e-912bed6d2b2c","resourceVersion":"626","creationTimestamp":"2024-10-14T15:25:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_10_14T08_25_50_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:25:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I1014 08:26:00.154724    3988 node_ready.go:53] node "multinode-671000-m02" has status "Ready":"False"
	I1014 08:26:00.643989    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000-m02
	I1014 08:26:00.644131    3988 round_trippers.go:469] Request Headers:
	I1014 08:26:00.644131    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:26:00.644131    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:26:00.647837    3988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:26:00.647837    3988 round_trippers.go:577] Response Headers:
	I1014 08:26:00.647837    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:26:00.647837    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:26:00.647837    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:26:00.647837    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:26:00 GMT
	I1014 08:26:00.647837    3988 round_trippers.go:580]     Audit-Id: 3bbcf0b6-a651-4ad9-a12c-3a29cb7c247f
	I1014 08:26:00.647837    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:26:00.648119    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000-m02","uid":"6883c70a-cd16-4441-905e-912bed6d2b2c","resourceVersion":"626","creationTimestamp":"2024-10-14T15:25:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_10_14T08_25_50_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:25:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I1014 08:26:01.142861    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000-m02
	I1014 08:26:01.142861    3988 round_trippers.go:469] Request Headers:
	I1014 08:26:01.142861    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:26:01.142861    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:26:01.147140    3988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:26:01.147243    3988 round_trippers.go:577] Response Headers:
	I1014 08:26:01.147243    3988 round_trippers.go:580]     Audit-Id: 0d0ba22d-80b4-41da-b439-37d78b8198a1
	I1014 08:26:01.147243    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:26:01.147243    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:26:01.147243    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:26:01.147243    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:26:01.147243    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:26:01 GMT
	I1014 08:26:01.147579    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000-m02","uid":"6883c70a-cd16-4441-905e-912bed6d2b2c","resourceVersion":"626","creationTimestamp":"2024-10-14T15:25:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_10_14T08_25_50_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:25:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I1014 08:26:01.642778    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000-m02
	I1014 08:26:01.642778    3988 round_trippers.go:469] Request Headers:
	I1014 08:26:01.642778    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:26:01.642778    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:26:01.650305    3988 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1014 08:26:01.650305    3988 round_trippers.go:577] Response Headers:
	I1014 08:26:01.650305    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:26:01 GMT
	I1014 08:26:01.650305    3988 round_trippers.go:580]     Audit-Id: 27797d18-fe05-45ec-888b-b2d00df5987a
	I1014 08:26:01.650305    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:26:01.650305    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:26:01.650305    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:26:01.650305    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:26:01.650837    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000-m02","uid":"6883c70a-cd16-4441-905e-912bed6d2b2c","resourceVersion":"626","creationTimestamp":"2024-10-14T15:25:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_10_14T08_25_50_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:25:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I1014 08:26:02.144704    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000-m02
	I1014 08:26:02.144704    3988 round_trippers.go:469] Request Headers:
	I1014 08:26:02.144704    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:26:02.144704    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:26:02.149840    3988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:26:02.149903    3988 round_trippers.go:577] Response Headers:
	I1014 08:26:02.149903    3988 round_trippers.go:580]     Audit-Id: 273a969c-8e24-4d77-becb-8eb0a00f06a7
	I1014 08:26:02.149903    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:26:02.149903    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:26:02.149903    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:26:02.149903    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:26:02.149903    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:26:02 GMT
	I1014 08:26:02.151541    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000-m02","uid":"6883c70a-cd16-4441-905e-912bed6d2b2c","resourceVersion":"626","creationTimestamp":"2024-10-14T15:25:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_10_14T08_25_50_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:25:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I1014 08:26:02.643174    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000-m02
	I1014 08:26:02.643174    3988 round_trippers.go:469] Request Headers:
	I1014 08:26:02.643174    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:26:02.643174    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:26:02.647614    3988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:26:02.647713    3988 round_trippers.go:577] Response Headers:
	I1014 08:26:02.647713    3988 round_trippers.go:580]     Audit-Id: 80a2748b-691f-45c7-8f1d-7a5c502a8e45
	I1014 08:26:02.647713    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:26:02.647713    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:26:02.647713    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:26:02.647713    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:26:02.647713    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:26:02 GMT
	I1014 08:26:02.647713    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000-m02","uid":"6883c70a-cd16-4441-905e-912bed6d2b2c","resourceVersion":"626","creationTimestamp":"2024-10-14T15:25:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_10_14T08_25_50_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:25:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I1014 08:26:02.648742    3988 node_ready.go:53] node "multinode-671000-m02" has status "Ready":"False"
	I1014 08:26:03.144346    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000-m02
	I1014 08:26:03.144437    3988 round_trippers.go:469] Request Headers:
	I1014 08:26:03.144437    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:26:03.144437    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:26:03.148857    3988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:26:03.149098    3988 round_trippers.go:577] Response Headers:
	I1014 08:26:03.149098    3988 round_trippers.go:580]     Audit-Id: 743b8cbc-1f8e-420d-9999-46b26d3e0693
	I1014 08:26:03.149098    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:26:03.149098    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:26:03.149098    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:26:03.149161    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:26:03.149161    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:26:03 GMT
	I1014 08:26:03.149518    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000-m02","uid":"6883c70a-cd16-4441-905e-912bed6d2b2c","resourceVersion":"626","creationTimestamp":"2024-10-14T15:25:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_10_14T08_25_50_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:25:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I1014 08:26:03.643426    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000-m02
	I1014 08:26:03.643426    3988 round_trippers.go:469] Request Headers:
	I1014 08:26:03.643426    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:26:03.643426    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:26:03.648095    3988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:26:03.648231    3988 round_trippers.go:577] Response Headers:
	I1014 08:26:03.648231    3988 round_trippers.go:580]     Audit-Id: e1fb0fc2-fd15-4158-bbdd-697ba8f07a0d
	I1014 08:26:03.648231    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:26:03.648231    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:26:03.648231    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:26:03.648231    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:26:03.648231    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:26:03 GMT
	I1014 08:26:03.648591    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000-m02","uid":"6883c70a-cd16-4441-905e-912bed6d2b2c","resourceVersion":"626","creationTimestamp":"2024-10-14T15:25:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_10_14T08_25_50_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:25:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I1014 08:26:04.143328    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000-m02
	I1014 08:26:04.143328    3988 round_trippers.go:469] Request Headers:
	I1014 08:26:04.143328    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:26:04.143328    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:26:04.148389    3988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:26:04.148964    3988 round_trippers.go:577] Response Headers:
	I1014 08:26:04.148964    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:26:04.149046    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:26:04.149046    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:26:04.149046    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:26:04 GMT
	I1014 08:26:04.149046    3988 round_trippers.go:580]     Audit-Id: 0a363671-2270-49f2-ad0d-6de37c4a4da6
	I1014 08:26:04.149046    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:26:04.149157    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000-m02","uid":"6883c70a-cd16-4441-905e-912bed6d2b2c","resourceVersion":"626","creationTimestamp":"2024-10-14T15:25:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_10_14T08_25_50_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:25:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I1014 08:26:04.643392    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000-m02
	I1014 08:26:04.643392    3988 round_trippers.go:469] Request Headers:
	I1014 08:26:04.643392    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:26:04.643491    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:26:04.646978    3988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:26:04.646978    3988 round_trippers.go:577] Response Headers:
	I1014 08:26:04.646978    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:26:04.646978    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:26:04.646978    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:26:04 GMT
	I1014 08:26:04.646978    3988 round_trippers.go:580]     Audit-Id: 7833f019-60b4-4efe-9350-d6505a14aaf3
	I1014 08:26:04.646978    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:26:04.646978    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:26:04.647210    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000-m02","uid":"6883c70a-cd16-4441-905e-912bed6d2b2c","resourceVersion":"626","creationTimestamp":"2024-10-14T15:25:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_10_14T08_25_50_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:25:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I1014 08:26:05.143584    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000-m02
	I1014 08:26:05.143584    3988 round_trippers.go:469] Request Headers:
	I1014 08:26:05.143584    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:26:05.143584    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:26:05.148884    3988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:26:05.148884    3988 round_trippers.go:577] Response Headers:
	I1014 08:26:05.148884    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:26:05.148884    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:26:05.148884    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:26:05.148884    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:26:05 GMT
	I1014 08:26:05.148884    3988 round_trippers.go:580]     Audit-Id: 36aa1f62-21d6-4df0-90fd-6b62de342207
	I1014 08:26:05.148884    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:26:05.150962    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000-m02","uid":"6883c70a-cd16-4441-905e-912bed6d2b2c","resourceVersion":"626","creationTimestamp":"2024-10-14T15:25:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_10_14T08_25_50_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:25:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I1014 08:26:05.150962    3988 node_ready.go:53] node "multinode-671000-m02" has status "Ready":"False"
	I1014 08:26:05.643880    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000-m02
	I1014 08:26:05.643880    3988 round_trippers.go:469] Request Headers:
	I1014 08:26:05.643880    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:26:05.643994    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:26:05.647368    3988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:26:05.647368    3988 round_trippers.go:577] Response Headers:
	I1014 08:26:05.647368    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:26:05.647505    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:26:05 GMT
	I1014 08:26:05.647505    3988 round_trippers.go:580]     Audit-Id: 15af2d57-b3d6-470e-90ce-47e3dbeb36cf
	I1014 08:26:05.647505    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:26:05.647505    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:26:05.647505    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:26:05.647808    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000-m02","uid":"6883c70a-cd16-4441-905e-912bed6d2b2c","resourceVersion":"626","creationTimestamp":"2024-10-14T15:25:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_10_14T08_25_50_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:25:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I1014 08:26:06.143438    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000-m02
	I1014 08:26:06.143524    3988 round_trippers.go:469] Request Headers:
	I1014 08:26:06.143524    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:26:06.143524    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:26:06.297743    3988 round_trippers.go:574] Response Status: 200 OK in 154 milliseconds
	I1014 08:26:06.297813    3988 round_trippers.go:577] Response Headers:
	I1014 08:26:06.297881    3988 round_trippers.go:580]     Audit-Id: 85710c86-9da4-4dd3-ae48-cf2eb4e39037
	I1014 08:26:06.297881    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:26:06.297881    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:26:06.297881    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:26:06.297881    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:26:06.297881    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:26:06 GMT
	I1014 08:26:06.298500    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000-m02","uid":"6883c70a-cd16-4441-905e-912bed6d2b2c","resourceVersion":"626","creationTimestamp":"2024-10-14T15:25:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_10_14T08_25_50_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:25:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I1014 08:26:06.643387    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000-m02
	I1014 08:26:06.643387    3988 round_trippers.go:469] Request Headers:
	I1014 08:26:06.643387    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:26:06.643387    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:26:06.647870    3988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:26:06.647930    3988 round_trippers.go:577] Response Headers:
	I1014 08:26:06.647930    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:26:06.647930    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:26:06 GMT
	I1014 08:26:06.648004    3988 round_trippers.go:580]     Audit-Id: 7e2ffd99-38e1-4fbe-af9a-7fed0dc2e90b
	I1014 08:26:06.648004    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:26:06.648004    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:26:06.648004    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:26:06.648310    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000-m02","uid":"6883c70a-cd16-4441-905e-912bed6d2b2c","resourceVersion":"626","creationTimestamp":"2024-10-14T15:25:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_10_14T08_25_50_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:25:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I1014 08:26:07.143314    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000-m02
	I1014 08:26:07.143314    3988 round_trippers.go:469] Request Headers:
	I1014 08:26:07.143314    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:26:07.143314    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:26:07.149540    3988 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1014 08:26:07.149618    3988 round_trippers.go:577] Response Headers:
	I1014 08:26:07.149618    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:26:07.149618    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:26:07.149618    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:26:07 GMT
	I1014 08:26:07.149618    3988 round_trippers.go:580]     Audit-Id: aa6c6be2-3037-43fb-bc3b-4d8e0a447ef6
	I1014 08:26:07.149618    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:26:07.149618    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:26:07.149885    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000-m02","uid":"6883c70a-cd16-4441-905e-912bed6d2b2c","resourceVersion":"626","creationTimestamp":"2024-10-14T15:25:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_10_14T08_25_50_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:25:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I1014 08:26:07.643195    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000-m02
	I1014 08:26:07.643725    3988 round_trippers.go:469] Request Headers:
	I1014 08:26:07.643725    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:26:07.643725    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:26:07.647482    3988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:26:07.647482    3988 round_trippers.go:577] Response Headers:
	I1014 08:26:07.647482    3988 round_trippers.go:580]     Audit-Id: 94538d31-70fc-4d73-bd63-0423e30cd918
	I1014 08:26:07.647482    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:26:07.647482    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:26:07.647482    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:26:07.647482    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:26:07.647626    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:26:07 GMT
	I1014 08:26:07.648193    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000-m02","uid":"6883c70a-cd16-4441-905e-912bed6d2b2c","resourceVersion":"626","creationTimestamp":"2024-10-14T15:25:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_10_14T08_25_50_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:25:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I1014 08:26:07.648682    3988 node_ready.go:53] node "multinode-671000-m02" has status "Ready":"False"
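(Editor's note: the block above repeats because minikube is polling the API server roughly every 500ms until the node's Ready condition turns True; the `round_trippers` lines are client-go's high-verbosity request/response tracing around each GET. Below is a minimal, illustrative Go sketch of that poll pattern, not minikube's actual node_ready.go code; the helper name waitNodeReady and the kubeconfig handling are assumptions added for a self-contained example, and only standard client-go calls are used.)

package main

import (
	"context"
	"fmt"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady is a hypothetical helper: it re-fetches the Node object
// every 500ms (the cadence visible in the log above) and returns once the
// NodeReady condition reports True, or when the context expires.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
				return nil
			}
		}
		// Mirrors the log's periodic status line while the node is not ready.
		fmt.Printf("node %q has status \"Ready\":\"False\"\n", name)
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(500 * time.Millisecond):
		}
	}
}

func main() {
	// Assumption: kubeconfig path taken from the environment for this sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
	defer cancel()
	if err := waitNodeReady(ctx, cs, "multinode-671000-m02"); err != nil {
		panic(err)
	}
}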
	I1014 08:26:08.143226    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000-m02
	I1014 08:26:08.143226    3988 round_trippers.go:469] Request Headers:
	I1014 08:26:08.143226    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:26:08.143226    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:26:08.148338    3988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:26:08.148438    3988 round_trippers.go:577] Response Headers:
	I1014 08:26:08.148438    3988 round_trippers.go:580]     Audit-Id: 8757d0f5-f072-449f-bcb1-f4e4241cb2a0
	I1014 08:26:08.148438    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:26:08.148438    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:26:08.148438    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:26:08.148438    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:26:08.148438    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:26:08 GMT
	I1014 08:26:08.148679    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000-m02","uid":"6883c70a-cd16-4441-905e-912bed6d2b2c","resourceVersion":"626","creationTimestamp":"2024-10-14T15:25:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_10_14T08_25_50_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:25:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I1014 08:26:08.642826    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000-m02
	I1014 08:26:08.642826    3988 round_trippers.go:469] Request Headers:
	I1014 08:26:08.642826    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:26:08.642826    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:26:08.647424    3988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:26:08.647558    3988 round_trippers.go:577] Response Headers:
	I1014 08:26:08.647558    3988 round_trippers.go:580]     Audit-Id: 17f94805-1bf6-4630-932c-134259ad0d00
	I1014 08:26:08.647558    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:26:08.647558    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:26:08.647558    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:26:08.647667    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:26:08.647667    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:26:08 GMT
	I1014 08:26:08.647936    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000-m02","uid":"6883c70a-cd16-4441-905e-912bed6d2b2c","resourceVersion":"626","creationTimestamp":"2024-10-14T15:25:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_10_14T08_25_50_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:25:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I1014 08:26:09.143188    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000-m02
	I1014 08:26:09.143188    3988 round_trippers.go:469] Request Headers:
	I1014 08:26:09.143188    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:26:09.143188    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:26:09.149133    3988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:26:09.149133    3988 round_trippers.go:577] Response Headers:
	I1014 08:26:09.149133    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:26:09.149133    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:26:09 GMT
	I1014 08:26:09.149133    3988 round_trippers.go:580]     Audit-Id: 50818a39-cc8c-4d1d-8b0e-9b9c17184f41
	I1014 08:26:09.149133    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:26:09.149274    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:26:09.149274    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:26:09.149516    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000-m02","uid":"6883c70a-cd16-4441-905e-912bed6d2b2c","resourceVersion":"626","creationTimestamp":"2024-10-14T15:25:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_10_14T08_25_50_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:25:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I1014 08:26:09.643332    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000-m02
	I1014 08:26:09.643814    3988 round_trippers.go:469] Request Headers:
	I1014 08:26:09.643814    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:26:09.643814    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:26:09.647175    3988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:26:09.647251    3988 round_trippers.go:577] Response Headers:
	I1014 08:26:09.647251    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:26:09.647251    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:26:09.647319    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:26:09 GMT
	I1014 08:26:09.647319    3988 round_trippers.go:580]     Audit-Id: e37358b7-1cf7-4fd5-ac29-d17944d66b52
	I1014 08:26:09.647319    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:26:09.647319    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:26:09.647536    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000-m02","uid":"6883c70a-cd16-4441-905e-912bed6d2b2c","resourceVersion":"626","creationTimestamp":"2024-10-14T15:25:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_10_14T08_25_50_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:25:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I1014 08:26:10.142999    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000-m02
	I1014 08:26:10.142999    3988 round_trippers.go:469] Request Headers:
	I1014 08:26:10.142999    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:26:10.142999    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:26:10.148366    3988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:26:10.148466    3988 round_trippers.go:577] Response Headers:
	I1014 08:26:10.148466    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:26:10.148554    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:26:10 GMT
	I1014 08:26:10.148554    3988 round_trippers.go:580]     Audit-Id: d0e37dc7-f8d0-4f7f-b452-4695a6f0370d
	I1014 08:26:10.148554    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:26:10.148690    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:26:10.148690    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:26:10.148823    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000-m02","uid":"6883c70a-cd16-4441-905e-912bed6d2b2c","resourceVersion":"626","creationTimestamp":"2024-10-14T15:25:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_10_14T08_25_50_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:25:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I1014 08:26:10.149344    3988 node_ready.go:53] node "multinode-671000-m02" has status "Ready":"False"
	I1014 08:26:10.643481    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000-m02
	I1014 08:26:10.643481    3988 round_trippers.go:469] Request Headers:
	I1014 08:26:10.643481    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:26:10.643481    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:26:10.648195    3988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:26:10.648295    3988 round_trippers.go:577] Response Headers:
	I1014 08:26:10.648295    3988 round_trippers.go:580]     Audit-Id: 61df3de1-8635-40db-96b6-9ce813a66ee7
	I1014 08:26:10.648295    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:26:10.648295    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:26:10.648295    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:26:10.648409    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:26:10.648434    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:26:10 GMT
	I1014 08:26:10.648650    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000-m02","uid":"6883c70a-cd16-4441-905e-912bed6d2b2c","resourceVersion":"626","creationTimestamp":"2024-10-14T15:25:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_10_14T08_25_50_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:25:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I1014 08:26:11.143654    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000-m02
	I1014 08:26:11.143654    3988 round_trippers.go:469] Request Headers:
	I1014 08:26:11.143654    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:26:11.143654    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:26:11.148497    3988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:26:11.148497    3988 round_trippers.go:577] Response Headers:
	I1014 08:26:11.148604    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:26:11.148604    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:26:11.148604    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:26:11 GMT
	I1014 08:26:11.148604    3988 round_trippers.go:580]     Audit-Id: 1b717639-ce2b-4de2-a515-a2cce31c6f66
	I1014 08:26:11.148604    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:26:11.148604    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:26:11.148886    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000-m02","uid":"6883c70a-cd16-4441-905e-912bed6d2b2c","resourceVersion":"626","creationTimestamp":"2024-10-14T15:25:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_10_14T08_25_50_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:25:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I1014 08:26:11.643362    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000-m02
	I1014 08:26:11.643425    3988 round_trippers.go:469] Request Headers:
	I1014 08:26:11.643425    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:26:11.643425    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:26:11.646922    3988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:26:11.646922    3988 round_trippers.go:577] Response Headers:
	I1014 08:26:11.646922    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:26:11.646922    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:26:11.646922    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:26:11 GMT
	I1014 08:26:11.646922    3988 round_trippers.go:580]     Audit-Id: 5305f7c2-3f51-4e0c-bbec-7b90233aba20
	I1014 08:26:11.646922    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:26:11.646922    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:26:11.647295    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000-m02","uid":"6883c70a-cd16-4441-905e-912bed6d2b2c","resourceVersion":"626","creationTimestamp":"2024-10-14T15:25:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_10_14T08_25_50_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:25:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I1014 08:26:12.142820    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000-m02
	I1014 08:26:12.143303    3988 round_trippers.go:469] Request Headers:
	I1014 08:26:12.143303    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:26:12.143303    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:26:12.147235    3988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:26:12.147235    3988 round_trippers.go:577] Response Headers:
	I1014 08:26:12.147235    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:26:12.147235    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:26:12.147235    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:26:12.147235    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:26:12 GMT
	I1014 08:26:12.147235    3988 round_trippers.go:580]     Audit-Id: 7f09c612-6908-470d-ae7f-04b486ecfca3
	I1014 08:26:12.147235    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:26:12.147235    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000-m02","uid":"6883c70a-cd16-4441-905e-912bed6d2b2c","resourceVersion":"626","creationTimestamp":"2024-10-14T15:25:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_10_14T08_25_50_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:25:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I1014 08:26:12.643969    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000-m02
	I1014 08:26:12.644061    3988 round_trippers.go:469] Request Headers:
	I1014 08:26:12.644061    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:26:12.644061    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:26:12.648266    3988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:26:12.648336    3988 round_trippers.go:577] Response Headers:
	I1014 08:26:12.648336    3988 round_trippers.go:580]     Audit-Id: a8e1e1f9-96ba-4879-b08e-fb720224a607
	I1014 08:26:12.648336    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:26:12.648398    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:26:12.648398    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:26:12.648398    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:26:12.648398    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:26:12 GMT
	I1014 08:26:12.648783    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000-m02","uid":"6883c70a-cd16-4441-905e-912bed6d2b2c","resourceVersion":"626","creationTimestamp":"2024-10-14T15:25:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_10_14T08_25_50_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:25:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I1014 08:26:12.649260    3988 node_ready.go:53] node "multinode-671000-m02" has status "Ready":"False"
	I1014 08:26:13.143434    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000-m02
	I1014 08:26:13.143434    3988 round_trippers.go:469] Request Headers:
	I1014 08:26:13.143612    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:26:13.143612    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:26:13.149146    3988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:26:13.149146    3988 round_trippers.go:577] Response Headers:
	I1014 08:26:13.149146    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:26:13.149146    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:26:13.149146    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:26:13 GMT
	I1014 08:26:13.149146    3988 round_trippers.go:580]     Audit-Id: 5075b6b6-f9b7-4bef-8035-bf1909adadae
	I1014 08:26:13.149146    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:26:13.149146    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:26:13.149146    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000-m02","uid":"6883c70a-cd16-4441-905e-912bed6d2b2c","resourceVersion":"626","creationTimestamp":"2024-10-14T15:25:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_10_14T08_25_50_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:25:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I1014 08:26:13.642883    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000-m02
	I1014 08:26:13.642883    3988 round_trippers.go:469] Request Headers:
	I1014 08:26:13.642883    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:26:13.642883    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:26:13.647165    3988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:26:13.647246    3988 round_trippers.go:577] Response Headers:
	I1014 08:26:13.647246    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:26:13.647326    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:26:13.647326    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:26:13.647326    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:26:13 GMT
	I1014 08:26:13.647326    3988 round_trippers.go:580]     Audit-Id: 64cb2494-2339-4993-b817-2d973182bba7
	I1014 08:26:13.647326    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:26:13.647571    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000-m02","uid":"6883c70a-cd16-4441-905e-912bed6d2b2c","resourceVersion":"626","creationTimestamp":"2024-10-14T15:25:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_10_14T08_25_50_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:25:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I1014 08:26:14.144251    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000-m02
	I1014 08:26:14.144251    3988 round_trippers.go:469] Request Headers:
	I1014 08:26:14.144251    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:26:14.144251    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:26:14.148062    3988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:26:14.148062    3988 round_trippers.go:577] Response Headers:
	I1014 08:26:14.148062    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:26:14.148062    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:26:14.148062    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:26:14 GMT
	I1014 08:26:14.148062    3988 round_trippers.go:580]     Audit-Id: d6d6b3d6-7c63-4008-b7db-e3953b9df726
	I1014 08:26:14.148062    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:26:14.148062    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:26:14.148616    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000-m02","uid":"6883c70a-cd16-4441-905e-912bed6d2b2c","resourceVersion":"626","creationTimestamp":"2024-10-14T15:25:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_10_14T08_25_50_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:25:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I1014 08:26:14.643098    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000-m02
	I1014 08:26:14.643098    3988 round_trippers.go:469] Request Headers:
	I1014 08:26:14.643098    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:26:14.643098    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:26:14.647353    3988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:26:14.647353    3988 round_trippers.go:577] Response Headers:
	I1014 08:26:14.647353    3988 round_trippers.go:580]     Audit-Id: 7c2bdc02-edf8-4b5f-97c8-50bcb4ef7aed
	I1014 08:26:14.647353    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:26:14.647353    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:26:14.647477    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:26:14.647477    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:26:14.647477    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:26:14 GMT
	I1014 08:26:14.647629    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000-m02","uid":"6883c70a-cd16-4441-905e-912bed6d2b2c","resourceVersion":"626","creationTimestamp":"2024-10-14T15:25:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_10_14T08_25_50_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:25:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I1014 08:26:15.143122    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000-m02
	I1014 08:26:15.143122    3988 round_trippers.go:469] Request Headers:
	I1014 08:26:15.143122    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:26:15.143122    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:26:15.148697    3988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:26:15.148814    3988 round_trippers.go:577] Response Headers:
	I1014 08:26:15.148814    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:26:15.148814    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:26:15.148814    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:26:15.148814    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:26:15.148814    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:26:15 GMT
	I1014 08:26:15.148908    3988 round_trippers.go:580]     Audit-Id: 000210f1-8a3b-4ac8-935e-90a9f7f39ac7
	I1014 08:26:15.149167    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000-m02","uid":"6883c70a-cd16-4441-905e-912bed6d2b2c","resourceVersion":"626","creationTimestamp":"2024-10-14T15:25:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_10_14T08_25_50_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:25:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I1014 08:26:15.149608    3988 node_ready.go:53] node "multinode-671000-m02" has status "Ready":"False"
	I1014 08:26:15.642778    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000-m02
	I1014 08:26:15.644870    3988 round_trippers.go:469] Request Headers:
	I1014 08:26:15.644870    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:26:15.644870    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:26:15.649029    3988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:26:15.649029    3988 round_trippers.go:577] Response Headers:
	I1014 08:26:15.649157    3988 round_trippers.go:580]     Audit-Id: 7fe29099-f5cf-4994-a0ef-6aa15be73790
	I1014 08:26:15.649157    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:26:15.649157    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:26:15.649157    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:26:15.649157    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:26:15.649157    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:26:15 GMT
	I1014 08:26:15.649398    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000-m02","uid":"6883c70a-cd16-4441-905e-912bed6d2b2c","resourceVersion":"626","creationTimestamp":"2024-10-14T15:25:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_10_14T08_25_50_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:25:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I1014 08:26:16.143691    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000-m02
	I1014 08:26:16.143691    3988 round_trippers.go:469] Request Headers:
	I1014 08:26:16.143691    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:26:16.143691    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:26:16.148189    3988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:26:16.148277    3988 round_trippers.go:577] Response Headers:
	I1014 08:26:16.148354    3988 round_trippers.go:580]     Audit-Id: d5a060c5-6886-42a7-97c7-e4ae286bbbc3
	I1014 08:26:16.148354    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:26:16.148354    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:26:16.148354    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:26:16.148354    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:26:16.148528    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:26:16 GMT
	I1014 08:26:16.148766    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000-m02","uid":"6883c70a-cd16-4441-905e-912bed6d2b2c","resourceVersion":"626","creationTimestamp":"2024-10-14T15:25:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_10_14T08_25_50_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:25:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I1014 08:26:16.643225    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000-m02
	I1014 08:26:16.643319    3988 round_trippers.go:469] Request Headers:
	I1014 08:26:16.643319    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:26:16.643405    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:26:16.647422    3988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:26:16.647422    3988 round_trippers.go:577] Response Headers:
	I1014 08:26:16.647532    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:26:16 GMT
	I1014 08:26:16.647532    3988 round_trippers.go:580]     Audit-Id: e66fefbf-786d-4e13-9347-7256c1d737dd
	I1014 08:26:16.647532    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:26:16.647532    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:26:16.647532    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:26:16.647532    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:26:16.647995    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000-m02","uid":"6883c70a-cd16-4441-905e-912bed6d2b2c","resourceVersion":"626","creationTimestamp":"2024-10-14T15:25:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_10_14T08_25_50_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:25:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I1014 08:26:17.142826    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000-m02
	I1014 08:26:17.142826    3988 round_trippers.go:469] Request Headers:
	I1014 08:26:17.142826    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:26:17.142826    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:26:17.147365    3988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:26:17.147365    3988 round_trippers.go:577] Response Headers:
	I1014 08:26:17.147458    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:26:17.147458    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:26:17.147458    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:26:17.147458    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:26:17 GMT
	I1014 08:26:17.147458    3988 round_trippers.go:580]     Audit-Id: ce50e4a9-e64f-450f-a830-24ac51b03b27
	I1014 08:26:17.147458    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:26:17.147896    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000-m02","uid":"6883c70a-cd16-4441-905e-912bed6d2b2c","resourceVersion":"626","creationTimestamp":"2024-10-14T15:25:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_10_14T08_25_50_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:25:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I1014 08:26:17.644268    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000-m02
	I1014 08:26:17.644369    3988 round_trippers.go:469] Request Headers:
	I1014 08:26:17.644369    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:26:17.644369    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:26:17.649094    3988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:26:17.649094    3988 round_trippers.go:577] Response Headers:
	I1014 08:26:17.649094    3988 round_trippers.go:580]     Audit-Id: c5e69375-4410-4fe9-9a0e-3f3088612100
	I1014 08:26:17.649094    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:26:17.649094    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:26:17.649094    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:26:17.649094    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:26:17.649186    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:26:17 GMT
	I1014 08:26:17.649521    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000-m02","uid":"6883c70a-cd16-4441-905e-912bed6d2b2c","resourceVersion":"626","creationTimestamp":"2024-10-14T15:25:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_10_14T08_25_50_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:25:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I1014 08:26:17.650132    3988 node_ready.go:53] node "multinode-671000-m02" has status "Ready":"False"
	I1014 08:26:18.143665    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000-m02
	I1014 08:26:18.143665    3988 round_trippers.go:469] Request Headers:
	I1014 08:26:18.143665    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:26:18.143665    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:26:18.148975    3988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:26:18.149134    3988 round_trippers.go:577] Response Headers:
	I1014 08:26:18.149134    3988 round_trippers.go:580]     Audit-Id: 4d519148-8627-4900-8562-cdb3993fecfd
	I1014 08:26:18.149134    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:26:18.149134    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:26:18.149134    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:26:18.149134    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:26:18.149134    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:26:18 GMT
	I1014 08:26:18.149448    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000-m02","uid":"6883c70a-cd16-4441-905e-912bed6d2b2c","resourceVersion":"626","creationTimestamp":"2024-10-14T15:25:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_10_14T08_25_50_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:25:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I1014 08:26:18.643344    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000-m02
	I1014 08:26:18.643344    3988 round_trippers.go:469] Request Headers:
	I1014 08:26:18.643344    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:26:18.643344    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:26:18.646950    3988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:26:18.647070    3988 round_trippers.go:577] Response Headers:
	I1014 08:26:18.647070    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:26:18 GMT
	I1014 08:26:18.647070    3988 round_trippers.go:580]     Audit-Id: 79159866-4179-4cfc-88e2-c66a1bf46c88
	I1014 08:26:18.647070    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:26:18.647070    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:26:18.647070    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:26:18.647070    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:26:18.647521    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000-m02","uid":"6883c70a-cd16-4441-905e-912bed6d2b2c","resourceVersion":"626","creationTimestamp":"2024-10-14T15:25:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_10_14T08_25_50_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:25:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I1014 08:26:19.144035    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000-m02
	I1014 08:26:19.144035    3988 round_trippers.go:469] Request Headers:
	I1014 08:26:19.144035    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:26:19.144035    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:26:19.147589    3988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 08:26:19.147669    3988 round_trippers.go:577] Response Headers:
	I1014 08:26:19.147669    3988 round_trippers.go:580]     Audit-Id: 6f967286-d676-4500-8a7d-7453e069e67e
	I1014 08:26:19.147669    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:26:19.147669    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:26:19.147669    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:26:19.147669    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:26:19.147669    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:26:19 GMT
	I1014 08:26:19.147669    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000-m02","uid":"6883c70a-cd16-4441-905e-912bed6d2b2c","resourceVersion":"657","creationTimestamp":"2024-10-14T15:25:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_10_14T08_25_50_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:25:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3144 chars]
	I1014 08:26:19.148380    3988 node_ready.go:49] node "multinode-671000-m02" has status "Ready":"True"
	I1014 08:26:19.148380    3988 node_ready.go:38] duration metric: took 28.0056611s for node "multinode-671000-m02" to be "Ready" ...
	I1014 08:26:19.148487    3988 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 08:26:19.148654    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/namespaces/kube-system/pods
	I1014 08:26:19.148654    3988 round_trippers.go:469] Request Headers:
	I1014 08:26:19.148654    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:26:19.148654    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:26:19.153266    3988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:26:19.153291    3988 round_trippers.go:577] Response Headers:
	I1014 08:26:19.153291    3988 round_trippers.go:580]     Audit-Id: 868787b9-f787-4454-8b5d-fe7826a08f67
	I1014 08:26:19.153291    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:26:19.153354    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:26:19.153354    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:26:19.153354    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:26:19.153354    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:26:19 GMT
	I1014 08:26:19.154053    3988 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"657"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"451","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 72687 chars]
	I1014 08:26:19.158115    3988 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-fs9ct" in "kube-system" namespace to be "Ready" ...
	I1014 08:26:19.158115    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:26:19.158115    3988 round_trippers.go:469] Request Headers:
	I1014 08:26:19.158115    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:26:19.158115    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:26:19.160960    3988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 08:26:19.161598    3988 round_trippers.go:577] Response Headers:
	I1014 08:26:19.161598    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:26:19 GMT
	I1014 08:26:19.161598    3988 round_trippers.go:580]     Audit-Id: ed1f6967-4472-4123-ae78-a69c79ba2a92
	I1014 08:26:19.161598    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:26:19.161598    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:26:19.161598    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:26:19.161598    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:26:19.161832    3988 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"451","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6836 chars]
	I1014 08:26:19.162634    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000
	I1014 08:26:19.162699    3988 round_trippers.go:469] Request Headers:
	I1014 08:26:19.162699    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:26:19.162699    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:26:19.164900    3988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 08:26:19.164900    3988 round_trippers.go:577] Response Headers:
	I1014 08:26:19.164900    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:26:19.164900    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:26:19.164900    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:26:19.164900    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:26:19.164900    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:26:19 GMT
	I1014 08:26:19.164900    3988 round_trippers.go:580]     Audit-Id: d519658d-6e8c-4345-a15f-f27add47801a
	I1014 08:26:19.164900    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"458","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I1014 08:26:19.165926    3988 pod_ready.go:93] pod "coredns-7c65d6cfc9-fs9ct" in "kube-system" namespace has status "Ready":"True"
	I1014 08:26:19.165926    3988 pod_ready.go:82] duration metric: took 7.8118ms for pod "coredns-7c65d6cfc9-fs9ct" in "kube-system" namespace to be "Ready" ...
	I1014 08:26:19.165926    3988 pod_ready.go:79] waiting up to 6m0s for pod "etcd-multinode-671000" in "kube-system" namespace to be "Ready" ...
	I1014 08:26:19.165926    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-671000
	I1014 08:26:19.165926    3988 round_trippers.go:469] Request Headers:
	I1014 08:26:19.165926    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:26:19.165926    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:26:19.169535    3988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:26:19.169535    3988 round_trippers.go:577] Response Headers:
	I1014 08:26:19.169535    3988 round_trippers.go:580]     Audit-Id: e7a92f4d-b813-4cbf-8fff-a6632d1e3e98
	I1014 08:26:19.169535    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:26:19.169535    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:26:19.169535    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:26:19.169535    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:26:19.169535    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:26:19 GMT
	I1014 08:26:19.169535    3988 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-671000","namespace":"kube-system","uid":"56dfdf16-1224-41e3-94de-9d7f4021a17d","resourceVersion":"409","creationTimestamp":"2024-10-14T15:22:39Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.20.100.167:2379","kubernetes.io/config.hash":"778fdb620bffec66f911bf24e3c8210b","kubernetes.io/config.mirror":"778fdb620bffec66f911bf24e3c8210b","kubernetes.io/config.seen":"2024-10-14T15:22:39.775208719Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6476 chars]
	I1014 08:26:19.170288    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000
	I1014 08:26:19.170288    3988 round_trippers.go:469] Request Headers:
	I1014 08:26:19.170288    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:26:19.170288    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:26:19.173028    3988 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 08:26:19.173305    3988 round_trippers.go:577] Response Headers:
	I1014 08:26:19.173305    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:26:19 GMT
	I1014 08:26:19.173305    3988 round_trippers.go:580]     Audit-Id: f36171f4-de53-42e8-ab1d-bfd6c6228081
	I1014 08:26:19.173305    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:26:19.173305    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:26:19.173305    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:26:19.173305    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:26:19.173613    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"458","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I1014 08:26:19.174027    3988 pod_ready.go:93] pod "etcd-multinode-671000" in "kube-system" namespace has status "Ready":"True"
	I1014 08:26:19.174095    3988 pod_ready.go:82] duration metric: took 8.1689ms for pod "etcd-multinode-671000" in "kube-system" namespace to be "Ready" ...
	I1014 08:26:19.174095    3988 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-multinode-671000" in "kube-system" namespace to be "Ready" ...
	I1014 08:26:19.174258    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-671000
	I1014 08:26:19.174258    3988 round_trippers.go:469] Request Headers:
	I1014 08:26:19.174258    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:26:19.174258    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:26:19.181017    3988 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1014 08:26:19.181017    3988 round_trippers.go:577] Response Headers:
	I1014 08:26:19.181017    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:26:19.181017    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:26:19 GMT
	I1014 08:26:19.181017    3988 round_trippers.go:580]     Audit-Id: 6d37f4d0-edb6-4133-b66d-145af2c05e41
	I1014 08:26:19.181017    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:26:19.181017    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:26:19.181017    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:26:19.181663    3988 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-671000","namespace":"kube-system","uid":"80ea37b8-9db1-4a39-9e9e-51c01edadfb1","resourceVersion":"370","creationTimestamp":"2024-10-14T15:22:39Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.20.100.167:8443","kubernetes.io/config.hash":"3cf38ccc62eb74f6e658e1f66ae8cab1","kubernetes.io/config.mirror":"3cf38ccc62eb74f6e658e1f66ae8cab1","kubernetes.io/config.seen":"2024-10-14T15:22:39.775211919Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7704 chars]
	I1014 08:26:19.181968    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000
	I1014 08:26:19.182512    3988 round_trippers.go:469] Request Headers:
	I1014 08:26:19.182512    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:26:19.182568    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:26:19.185730    3988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:26:19.185774    3988 round_trippers.go:577] Response Headers:
	I1014 08:26:19.185774    3988 round_trippers.go:580]     Audit-Id: a3cebfbe-2132-4fa4-9e79-9878e0bdf164
	I1014 08:26:19.185774    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:26:19.185831    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:26:19.185831    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:26:19.185831    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:26:19.185831    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:26:19 GMT
	I1014 08:26:19.185908    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"458","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I1014 08:26:19.185908    3988 pod_ready.go:93] pod "kube-apiserver-multinode-671000" in "kube-system" namespace has status "Ready":"True"
	I1014 08:26:19.186443    3988 pod_ready.go:82] duration metric: took 12.3473ms for pod "kube-apiserver-multinode-671000" in "kube-system" namespace to be "Ready" ...
	I1014 08:26:19.186443    3988 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-multinode-671000" in "kube-system" namespace to be "Ready" ...
	I1014 08:26:19.186623    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-671000
	I1014 08:26:19.186658    3988 round_trippers.go:469] Request Headers:
	I1014 08:26:19.186658    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:26:19.186658    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:26:19.192672    3988 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1014 08:26:19.192672    3988 round_trippers.go:577] Response Headers:
	I1014 08:26:19.192672    3988 round_trippers.go:580]     Audit-Id: a7fe863f-b26d-4366-a199-79994cb58f76
	I1014 08:26:19.192672    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:26:19.192672    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:26:19.192672    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:26:19.192672    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:26:19.192672    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:26:19 GMT
	I1014 08:26:19.192927    3988 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-671000","namespace":"kube-system","uid":"a5c7bb80-c844-476f-ba47-1cd4e599b92d","resourceVersion":"406","creationTimestamp":"2024-10-14T15:22:39Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b83e31864e0ae98d29d960866012ecb0","kubernetes.io/config.mirror":"b83e31864e0ae98d29d960866012ecb0","kubernetes.io/config.seen":"2024-10-14T15:22:39.775213119Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7269 chars]
	I1014 08:26:19.192927    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000
	I1014 08:26:19.192927    3988 round_trippers.go:469] Request Headers:
	I1014 08:26:19.192927    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:26:19.192927    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:26:19.196680    3988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:26:19.196680    3988 round_trippers.go:577] Response Headers:
	I1014 08:26:19.196759    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:26:19.196759    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:26:19 GMT
	I1014 08:26:19.196759    3988 round_trippers.go:580]     Audit-Id: a8acf661-8012-4cd2-8c9e-a574e04047c6
	I1014 08:26:19.196759    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:26:19.196759    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:26:19.196759    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:26:19.197017    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"458","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I1014 08:26:19.197419    3988 pod_ready.go:93] pod "kube-controller-manager-multinode-671000" in "kube-system" namespace has status "Ready":"True"
	I1014 08:26:19.197523    3988 pod_ready.go:82] duration metric: took 11.0237ms for pod "kube-controller-manager-multinode-671000" in "kube-system" namespace to be "Ready" ...
	I1014 08:26:19.197523    3988 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-kbpjf" in "kube-system" namespace to be "Ready" ...
	I1014 08:26:19.343747    3988 request.go:632] Waited for 146.1638ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.100.167:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kbpjf
	I1014 08:26:19.343747    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kbpjf
	I1014 08:26:19.344291    3988 round_trippers.go:469] Request Headers:
	I1014 08:26:19.344322    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:26:19.344322    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:26:19.349016    3988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:26:19.349016    3988 round_trippers.go:577] Response Headers:
	I1014 08:26:19.349016    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:26:19.349016    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:26:19.349016    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:26:19 GMT
	I1014 08:26:19.349016    3988 round_trippers.go:580]     Audit-Id: 95dd9a20-e25c-4089-b452-76196a37a6fe
	I1014 08:26:19.349016    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:26:19.349016    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:26:19.349541    3988 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-kbpjf","generateName":"kube-proxy-","namespace":"kube-system","uid":"004b7f38-fa3b-4c2c-9524-8d5b1ba514e9","resourceVersion":"631","creationTimestamp":"2024-10-14T15:25:49Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e5217416-1ed2-4ea5-ae9b-ee7bcdba0d2a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:25:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e5217416-1ed2-4ea5-ae9b-ee7bcdba0d2a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6207 chars]
	I1014 08:26:19.544691    3988 request.go:632] Waited for 194.2392ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.100.167:8443/api/v1/nodes/multinode-671000-m02
	I1014 08:26:19.545180    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000-m02
	I1014 08:26:19.545180    3988 round_trippers.go:469] Request Headers:
	I1014 08:26:19.545261    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:26:19.545261    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:26:19.549815    3988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:26:19.549815    3988 round_trippers.go:577] Response Headers:
	I1014 08:26:19.549815    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:26:19.549815    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:26:19.549901    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:26:19 GMT
	I1014 08:26:19.549901    3988 round_trippers.go:580]     Audit-Id: d83bd352-f5a6-40f2-ac41-f937892d1609
	I1014 08:26:19.549901    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:26:19.549901    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:26:19.549901    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000-m02","uid":"6883c70a-cd16-4441-905e-912bed6d2b2c","resourceVersion":"657","creationTimestamp":"2024-10-14T15:25:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_10_14T08_25_50_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:25:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3144 chars]
	I1014 08:26:19.550437    3988 pod_ready.go:93] pod "kube-proxy-kbpjf" in "kube-system" namespace has status "Ready":"True"
	I1014 08:26:19.550603    3988 pod_ready.go:82] duration metric: took 352.9133ms for pod "kube-proxy-kbpjf" in "kube-system" namespace to be "Ready" ...
	I1014 08:26:19.550603    3988 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-r74dx" in "kube-system" namespace to be "Ready" ...
	I1014 08:26:19.743786    3988 request.go:632] Waited for 193.0827ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.100.167:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r74dx
	I1014 08:26:19.744329    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r74dx
	I1014 08:26:19.744395    3988 round_trippers.go:469] Request Headers:
	I1014 08:26:19.744395    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:26:19.744395    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:26:19.758210    3988 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I1014 08:26:19.758286    3988 round_trippers.go:577] Response Headers:
	I1014 08:26:19.758286    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:26:19.758286    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:26:19.758286    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:26:19 GMT
	I1014 08:26:19.758286    3988 round_trippers.go:580]     Audit-Id: 833eef9e-8a1e-441f-b25c-923ceb581165
	I1014 08:26:19.758286    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:26:19.758286    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:26:19.758521    3988 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-r74dx","generateName":"kube-proxy-","namespace":"kube-system","uid":"f8d14473-8859-4015-84e9-d00656cc00c9","resourceVersion":"402","creationTimestamp":"2024-10-14T15:22:44Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e5217416-1ed2-4ea5-ae9b-ee7bcdba0d2a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e5217416-1ed2-4ea5-ae9b-ee7bcdba0d2a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6199 chars]
	I1014 08:26:19.944775    3988 request.go:632] Waited for 185.4924ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.100.167:8443/api/v1/nodes/multinode-671000
	I1014 08:26:19.944775    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000
	I1014 08:26:19.944775    3988 round_trippers.go:469] Request Headers:
	I1014 08:26:19.944775    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:26:19.944775    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:26:19.949722    3988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:26:19.949790    3988 round_trippers.go:577] Response Headers:
	I1014 08:26:19.949790    3988 round_trippers.go:580]     Audit-Id: 772cdc3c-b172-443d-8367-b65acc1db934
	I1014 08:26:19.949790    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:26:19.949790    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:26:19.949790    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:26:19.949855    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:26:19.949855    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:26:19 GMT
	I1014 08:26:19.950125    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"458","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I1014 08:26:19.950647    3988 pod_ready.go:93] pod "kube-proxy-r74dx" in "kube-system" namespace has status "Ready":"True"
	I1014 08:26:19.950647    3988 pod_ready.go:82] duration metric: took 400.0432ms for pod "kube-proxy-r74dx" in "kube-system" namespace to be "Ready" ...
	I1014 08:26:19.950718    3988 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-multinode-671000" in "kube-system" namespace to be "Ready" ...
	I1014 08:26:20.143948    3988 request.go:632] Waited for 193.1221ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.100.167:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-671000
	I1014 08:26:20.143948    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-671000
	I1014 08:26:20.143948    3988 round_trippers.go:469] Request Headers:
	I1014 08:26:20.143948    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:26:20.143948    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:26:20.148090    3988 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:26:20.148142    3988 round_trippers.go:577] Response Headers:
	I1014 08:26:20.148142    3988 round_trippers.go:580]     Audit-Id: 15f6a8fc-2fc1-4d69-98a9-6173f1421514
	I1014 08:26:20.148210    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:26:20.148210    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:26:20.148210    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:26:20.148210    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:26:20.148210    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:26:20 GMT
	I1014 08:26:20.148581    3988 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-671000","namespace":"kube-system","uid":"97febcab-f54d-4338-ba7c-2dc5e69b77fc","resourceVersion":"421","creationTimestamp":"2024-10-14T15:22:38Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"3e987cfaedc75c39145e8fc131c60c81","kubernetes.io/config.mirror":"3e987cfaedc75c39145e8fc131c60c81","kubernetes.io/config.seen":"2024-10-14T15:22:32.104995089Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4999 chars]
	I1014 08:26:20.343651    3988 request.go:632] Waited for 194.6209ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.100.167:8443/api/v1/nodes/multinode-671000
	I1014 08:26:20.343651    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes/multinode-671000
	I1014 08:26:20.343651    3988 round_trippers.go:469] Request Headers:
	I1014 08:26:20.343651    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:26:20.343651    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:26:20.348736    3988 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:26:20.348736    3988 round_trippers.go:577] Response Headers:
	I1014 08:26:20.348736    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:26:20.348806    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:26:20.348806    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:26:20.348806    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:26:20.348806    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:26:20 GMT
	I1014 08:26:20.348806    3988 round_trippers.go:580]     Audit-Id: 4e8537cb-a30a-4c2b-be73-0033115b41ea
	I1014 08:26:20.348955    3988 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"458","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I1014 08:26:20.349874    3988 pod_ready.go:93] pod "kube-scheduler-multinode-671000" in "kube-system" namespace has status "Ready":"True"
	I1014 08:26:20.349945    3988 pod_ready.go:82] duration metric: took 399.2269ms for pod "kube-scheduler-multinode-671000" in "kube-system" namespace to be "Ready" ...
	I1014 08:26:20.349945    3988 pod_ready.go:39] duration metric: took 1.2014567s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
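
The pod_ready waits above poll each control-plane pod's Ready condition against the API server, with a 6m0s budget per pod, re-fetching the owning node between checks. Below is a minimal client-go sketch of the same kind of readiness poll; the kubeconfig path and the isPodReady helper are illustrative, not minikube's actual pod_ready.go code.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Hypothetical kubeconfig path; substitute your own.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll every 200ms for up to 6 minutes, mirroring the 6m0s budget in the log.
	err = wait.PollUntilContextTimeout(context.Background(), 200*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "etcd-multinode-671000", metav1.GetOptions{})
			if err != nil {
				return false, nil // treat transient errors as "not ready yet" and keep polling
			}
			return isPodReady(pod), nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}
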
	I1014 08:26:20.350020    3988 system_svc.go:44] waiting for kubelet service to be running ....
	I1014 08:26:20.361078    3988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 08:26:20.390035    3988 system_svc.go:56] duration metric: took 40.0147ms WaitForService to wait for kubelet
	I1014 08:26:20.390684    3988 kubeadm.go:582] duration metric: took 29.5045249s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 08:26:20.390684    3988 node_conditions.go:102] verifying NodePressure condition ...
	I1014 08:26:20.544048    3988 request.go:632] Waited for 153.3642ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.100.167:8443/api/v1/nodes
	I1014 08:26:20.544570    3988 round_trippers.go:463] GET https://172.20.100.167:8443/api/v1/nodes
	I1014 08:26:20.544570    3988 round_trippers.go:469] Request Headers:
	I1014 08:26:20.544570    3988 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:26:20.544570    3988 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:26:20.548797    3988 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:26:20.548797    3988 round_trippers.go:577] Response Headers:
	I1014 08:26:20.548797    3988 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:26:20 GMT
	I1014 08:26:20.548797    3988 round_trippers.go:580]     Audit-Id: ee9184ac-bb4d-4aae-9bec-b2df90a273b9
	I1014 08:26:20.548877    3988 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:26:20.548877    3988 round_trippers.go:580]     Content-Type: application/json
	I1014 08:26:20.548877    3988 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:26:20.548877    3988 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:26:20.549432    3988 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"660"},"items":[{"metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"458","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 9662 chars]
	I1014 08:26:20.550216    3988 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 08:26:20.550284    3988 node_conditions.go:123] node cpu capacity is 2
	I1014 08:26:20.550284    3988 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 08:26:20.550350    3988 node_conditions.go:123] node cpu capacity is 2
	I1014 08:26:20.550350    3988 node_conditions.go:105] duration metric: took 159.6659ms to run NodePressure ...
	I1014 08:26:20.550350    3988 start.go:241] waiting for startup goroutines ...
	I1014 08:26:20.550424    3988 start.go:255] writing updated cluster config ...
	I1014 08:26:20.562656    3988 ssh_runner.go:195] Run: rm -f paused
	I1014 08:26:20.710719    3988 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1014 08:26:20.715329    3988 out.go:177] * Done! kubectl is now configured to use "multinode-671000" cluster and "default" namespace by default
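
The "Waited for ... due to client-side throttling" lines above come from client-go's own rate limiter rather than from server-side priority and fairness (the message says as much). A rest.Config left at its zero values defaults to QPS=5 and Burst=10, so a burst of readiness GETs like the ones above gets queued, and client-go's request.go logs the resulting delay. A short sketch of building a client with higher limits, where the values and kubeconfig path are illustrative only:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path; substitute your own.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	// client-go defaults to QPS=5, Burst=10 when these fields are zero;
	// raising them shortens or eliminates the client-side throttling waits.
	cfg.QPS = 50
	cfg.Burst = 100
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Printf("client ready: %T\n", cs)
}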
	
	
	==> Docker <==
	Oct 14 15:23:07 multinode-671000 dockerd[1443]: time="2024-10-14T15:23:07.217809728Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 15:23:07 multinode-671000 dockerd[1443]: time="2024-10-14T15:23:07.235828647Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 14 15:23:07 multinode-671000 dockerd[1443]: time="2024-10-14T15:23:07.235966146Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 14 15:23:07 multinode-671000 dockerd[1443]: time="2024-10-14T15:23:07.236044345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 15:23:07 multinode-671000 dockerd[1443]: time="2024-10-14T15:23:07.236155044Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 15:23:07 multinode-671000 cri-dockerd[1336]: time="2024-10-14T15:23:07Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1863de70f2316e54fa61ef7c5c6aba94808669b81b1cc811dce745011ee807cb/resolv.conf as [nameserver 172.20.96.1]"
	Oct 14 15:23:07 multinode-671000 cri-dockerd[1336]: time="2024-10-14T15:23:07Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2f8cc9a218fef4446839b0253827fc2dd89f8eccbd66ab7f0e5123c033f0aacc/resolv.conf as [nameserver 172.20.96.1]"
	Oct 14 15:23:07 multinode-671000 dockerd[1443]: time="2024-10-14T15:23:07.570732064Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 14 15:23:07 multinode-671000 dockerd[1443]: time="2024-10-14T15:23:07.570970962Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 14 15:23:07 multinode-671000 dockerd[1443]: time="2024-10-14T15:23:07.570993462Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 15:23:07 multinode-671000 dockerd[1443]: time="2024-10-14T15:23:07.573644840Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 15:23:07 multinode-671000 dockerd[1443]: time="2024-10-14T15:23:07.685811302Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 14 15:23:07 multinode-671000 dockerd[1443]: time="2024-10-14T15:23:07.686272098Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 14 15:23:07 multinode-671000 dockerd[1443]: time="2024-10-14T15:23:07.686510796Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 15:23:07 multinode-671000 dockerd[1443]: time="2024-10-14T15:23:07.686986792Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 15:26:45 multinode-671000 dockerd[1443]: time="2024-10-14T15:26:45.036746498Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 14 15:26:45 multinode-671000 dockerd[1443]: time="2024-10-14T15:26:45.036981299Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 14 15:26:45 multinode-671000 dockerd[1443]: time="2024-10-14T15:26:45.037070099Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 15:26:45 multinode-671000 dockerd[1443]: time="2024-10-14T15:26:45.037845702Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 15:26:45 multinode-671000 cri-dockerd[1336]: time="2024-10-14T15:26:45Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/06e529266db4b2cf3ad09285cddeb89d7d11e858b5156cf73f41198c9500b9f2/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Oct 14 15:26:46 multinode-671000 cri-dockerd[1336]: time="2024-10-14T15:26:46Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Oct 14 15:26:46 multinode-671000 dockerd[1443]: time="2024-10-14T15:26:46.976167388Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 14 15:26:46 multinode-671000 dockerd[1443]: time="2024-10-14T15:26:46.976402688Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 14 15:26:46 multinode-671000 dockerd[1443]: time="2024-10-14T15:26:46.976435988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 15:26:46 multinode-671000 dockerd[1443]: time="2024-10-14T15:26:46.976536387Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	cbf0b40e378b9       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   48 seconds ago      Running             busybox                   0                   06e529266db4b       busybox-7dff88458-vlp7j
	d9831e9f8ce8c       c69fa2e9cbf5f                                                                                         4 minutes ago       Running             coredns                   0                   2f8cc9a218fef       coredns-7c65d6cfc9-fs9ct
	3d8b7bae48a59       6e38f40d628db                                                                                         4 minutes ago       Running             storage-provisioner       0                   1863de70f2316       storage-provisioner
	fcdf89a3ac8ce       kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387              4 minutes ago       Running             kindnet-cni               0                   5e48ddcfdf90a       kindnet-wqrx6
	ea19428d70363       60c005f310ff3                                                                                         4 minutes ago       Running             kube-proxy                0                   7144d8ce208cf       kube-proxy-r74dx
	661e75bbf6b46       9aa1fad941575                                                                                         5 minutes ago       Running             kube-scheduler            0                   2dc78387553ff       kube-scheduler-multinode-671000
	712aad669c9f6       175ffd71cce3d                                                                                         5 minutes ago       Running             kube-controller-manager   0                   bfdde08319e32       kube-controller-manager-multinode-671000
	1ba3cd8bbd596       2e96e5913fc06                                                                                         5 minutes ago       Running             etcd                      0                   d5733d27d2f1c       etcd-multinode-671000
	0b5a6e440d7b6       6bab7719df100                                                                                         5 minutes ago       Running             kube-apiserver            0                   2c6be2bd1889b       kube-apiserver-multinode-671000
	
	
	==> coredns [d9831e9f8ce8] <==
	[INFO] 10.244.1.2:51603 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001109s
	[INFO] 10.244.0.3:37516 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002057s
	[INFO] 10.244.0.3:41090 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0001329s
	[INFO] 10.244.0.3:45330 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001397s
	[INFO] 10.244.0.3:46520 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000154699s
	[INFO] 10.244.0.3:40342 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0001347s
	[INFO] 10.244.0.3:60385 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001518s
	[INFO] 10.244.0.3:56338 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001527s
	[INFO] 10.244.0.3:50480 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0002505s
	[INFO] 10.244.1.2:60726 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002297s
	[INFO] 10.244.1.2:44347 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0002249s
	[INFO] 10.244.1.2:49092 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001075s
	[INFO] 10.244.1.2:54937 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000068s
	[INFO] 10.244.0.3:59749 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002552s
	[INFO] 10.244.0.3:56008 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0002499s
	[INFO] 10.244.0.3:60338 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001065s
	[INFO] 10.244.0.3:49712 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001634s
	[INFO] 10.244.1.2:56508 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001577s
	[INFO] 10.244.1.2:45387 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001545s
	[INFO] 10.244.1.2:39608 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0001419s
	[INFO] 10.244.1.2:53878 - 5 "PTR IN 1.96.20.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.0001923s
	[INFO] 10.244.0.3:58589 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002828s
	[INFO] 10.244.0.3:58608 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001609s
	[INFO] 10.244.0.3:52599 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000633s
	[INFO] 10.244.0.3:58233 - 5 "PTR IN 1.96.20.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.0001058s
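
A key for reading the coredns lines above, assuming the log plugin's default format: client address and port, the DNS message ID, then the quoted query section (record type, class, name, transport, request size in bytes, DNSSEC-OK bit, advertised UDP buffer size), followed by the response code, response flags, response size in bytes, and the time taken. For example, the earlier line

[INFO] 10.244.0.3:41090 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0001329s

records an AAAA lookup for kubernetes.default. over UDP (36-byte query, DO bit unset, 512-byte buffer) answered NXDOMAIN with flags qr,aa,rd,ra in a 111-byte response after about 0.13ms. The NXDOMAIN results for kubernetes.default. and kubernetes.default.default.svc.cluster.local, followed by NOERROR for kubernetes.default.svc.cluster.local, show the resolver walking its ndots search path until the fully qualified service name resolves.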
	
	
	==> describe nodes <==
	Name:               multinode-671000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-671000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b
	                    minikube.k8s.io/name=multinode-671000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_14T08_22_41_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Oct 2024 15:22:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-671000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Oct 2024 15:27:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 14 Oct 2024 15:27:15 +0000   Mon, 14 Oct 2024 15:22:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 14 Oct 2024 15:27:15 +0000   Mon, 14 Oct 2024 15:22:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 14 Oct 2024 15:27:15 +0000   Mon, 14 Oct 2024 15:22:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 14 Oct 2024 15:27:15 +0000   Mon, 14 Oct 2024 15:23:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.20.100.167
	  Hostname:    multinode-671000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 3ca7679283e4497f8e60c7cb90c15442
	  System UUID:                f72bc210-2ad8-1e4f-ad7d-3e75159c3d98
	  Boot ID:                    4fd64e6d-4470-4de6-b367-793138f19607
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-vlp7j                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kube-system                 coredns-7c65d6cfc9-fs9ct                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m49s
	  kube-system                 etcd-multinode-671000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m55s
	  kube-system                 kindnet-wqrx6                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m50s
	  kube-system                 kube-apiserver-multinode-671000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 kube-controller-manager-multinode-671000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 kube-proxy-r74dx                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 kube-scheduler-multinode-671000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m56s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m46s                kube-proxy       
	  Normal  Starting                 5m2s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m2s (x8 over 5m2s)  kubelet          Node multinode-671000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m2s (x8 over 5m2s)  kubelet          Node multinode-671000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m2s (x7 over 5m2s)  kubelet          Node multinode-671000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m2s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m55s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m55s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m55s                kubelet          Node multinode-671000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m55s                kubelet          Node multinode-671000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m55s                kubelet          Node multinode-671000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m50s                node-controller  Node multinode-671000 event: Registered Node multinode-671000 in Controller
	  Normal  NodeReady                4m28s                kubelet          Node multinode-671000 status is now: NodeReady
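
As a cross-check on the Allocated resources table above: CPU requests are 100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) = 850m, and 850m of the node's 2 CPUs (2000m) is 42.5%, which the table rounds down to 42%. Memory requests are 70Mi + 100Mi + 50Mi = 220Mi, roughly 10% of the 2164264Ki (about 2113Mi) allocatable. The limits rows come from kindnet alone for CPU (100m) and from coredns (170Mi) plus kindnet (50Mi) for memory, giving 100m (5%) and 220Mi (10%).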
	
	
	Name:               multinode-671000-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-671000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b
	                    minikube.k8s.io/name=multinode-671000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_14T08_25_50_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Oct 2024 15:25:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-671000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Oct 2024 15:27:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 14 Oct 2024 15:26:50 +0000   Mon, 14 Oct 2024 15:25:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 14 Oct 2024 15:26:50 +0000   Mon, 14 Oct 2024 15:25:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 14 Oct 2024 15:26:50 +0000   Mon, 14 Oct 2024 15:25:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 14 Oct 2024 15:26:50 +0000   Mon, 14 Oct 2024 15:26:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.20.109.137
	  Hostname:    multinode-671000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 d57a3470ff3c4f03abf025a19c5c23d9
	  System UUID:                d36e677f-0d87-a94f-af28-d8324326f88f
	  Boot ID:                    33649cab-156e-4789-8adf-a6fc060b3de7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-bnqj6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kube-system                 kindnet-rgbjf              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      105s
	  kube-system                 kube-proxy-kbpjf           0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 93s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  105s (x2 over 105s)  kubelet          Node multinode-671000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    105s (x2 over 105s)  kubelet          Node multinode-671000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     105s (x2 over 105s)  kubelet          Node multinode-671000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  105s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           100s                 node-controller  Node multinode-671000-m02 event: Registered Node multinode-671000-m02 in Controller
	  Normal  NodeReady                76s                  kubelet          Node multinode-671000-m02 status is now: NodeReady
	
	
	==> dmesg <==
	[  +7.019677] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Oct14 15:21] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[  +0.182316] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[Oct14 15:22] systemd-fstab-generator[1011]: Ignoring "noauto" option for root device
	[  +0.099965] kauditd_printk_skb: 65 callbacks suppressed
	[  +0.538566] systemd-fstab-generator[1050]: Ignoring "noauto" option for root device
	[  +0.198095] systemd-fstab-generator[1062]: Ignoring "noauto" option for root device
	[  +0.234617] systemd-fstab-generator[1076]: Ignoring "noauto" option for root device
	[  +2.846776] systemd-fstab-generator[1289]: Ignoring "noauto" option for root device
	[  +0.191219] systemd-fstab-generator[1301]: Ignoring "noauto" option for root device
	[  +0.205992] systemd-fstab-generator[1313]: Ignoring "noauto" option for root device
	[  +0.264737] systemd-fstab-generator[1328]: Ignoring "noauto" option for root device
	[ +11.100944] systemd-fstab-generator[1429]: Ignoring "noauto" option for root device
	[  +0.104180] kauditd_printk_skb: 202 callbacks suppressed
	[  +3.581390] systemd-fstab-generator[1680]: Ignoring "noauto" option for root device
	[  +5.249370] systemd-fstab-generator[1821]: Ignoring "noauto" option for root device
	[  +0.102727] kauditd_printk_skb: 70 callbacks suppressed
	[  +8.049888] systemd-fstab-generator[2219]: Ignoring "noauto" option for root device
	[  +0.148609] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.566194] systemd-fstab-generator[2323]: Ignoring "noauto" option for root device
	[  +0.227564] kauditd_printk_skb: 12 callbacks suppressed
	[  +8.226544] kauditd_printk_skb: 51 callbacks suppressed
	[Oct14 15:26] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [1ba3cd8bbd59] <==
	{"level":"info","ts":"2024-10-14T15:22:45.418379Z","caller":"traceutil/trace.go:171","msg":"trace[1419061663] transaction","detail":"{read_only:false; response_revision:375; number_of_response:1; }","duration":"102.979131ms","start":"2024-10-14T15:22:45.315375Z","end":"2024-10-14T15:22:45.418354Z","steps":["trace[1419061663] 'process raft request'  (duration: 50.611605ms)","trace[1419061663] 'compare'  (duration: 51.544164ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-14T15:23:21.083197Z","caller":"traceutil/trace.go:171","msg":"trace[990111955] transaction","detail":"{read_only:false; response_revision:466; number_of_response:1; }","duration":"117.078732ms","start":"2024-10-14T15:23:20.966082Z","end":"2024-10-14T15:23:21.083161Z","steps":["trace[990111955] 'process raft request'  (duration: 116.848628ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-14T15:25:43.434194Z","caller":"traceutil/trace.go:171","msg":"trace[1181646043] linearizableReadLoop","detail":"{readStateIndex:621; appliedIndex:620; }","duration":"394.954662ms","start":"2024-10-14T15:25:43.039220Z","end":"2024-10-14T15:25:43.434174Z","steps":["trace[1181646043] 'read index received'  (duration: 394.756961ms)","trace[1181646043] 'applied index is now lower than readState.Index'  (duration: 197.001µs)"],"step_count":2}
	{"level":"info","ts":"2024-10-14T15:25:43.434340Z","caller":"traceutil/trace.go:171","msg":"trace[857485273] transaction","detail":"{read_only:false; response_revision:578; number_of_response:1; }","duration":"417.132266ms","start":"2024-10-14T15:25:43.017180Z","end":"2024-10-14T15:25:43.434312Z","steps":["trace[857485273] 'process raft request'  (duration: 416.809865ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-14T15:25:43.434914Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"253.793996ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-10-14T15:25:43.435861Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"396.517869ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/horizontalpodautoscalers/\" range_end:\"/registry/horizontalpodautoscalers0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-14T15:25:43.435996Z","caller":"traceutil/trace.go:171","msg":"trace[2103150444] range","detail":"{range_begin:/registry/horizontalpodautoscalers/; range_end:/registry/horizontalpodautoscalers0; response_count:0; response_revision:578; }","duration":"396.76977ms","start":"2024-10-14T15:25:43.039216Z","end":"2024-10-14T15:25:43.435986Z","steps":["trace[2103150444] 'agreement among raft nodes before linearized reading'  (duration: 396.481469ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-14T15:25:43.436057Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-14T15:25:43.039190Z","time spent":"396.85677ms","remote":"127.0.0.1:36118","response type":"/etcdserverpb.KV/Range","request count":0,"request size":76,"response count":0,"response size":27,"request content":"key:\"/registry/horizontalpodautoscalers/\" range_end:\"/registry/horizontalpodautoscalers0\" count_only:true "}
	{"level":"info","ts":"2024-10-14T15:25:43.436326Z","caller":"traceutil/trace.go:171","msg":"trace[1451001340] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:578; }","duration":"254.985902ms","start":"2024-10-14T15:25:43.180956Z","end":"2024-10-14T15:25:43.435942Z","steps":["trace[1451001340] 'agreement among raft nodes before linearized reading'  (duration: 253.760596ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-14T15:25:43.435545Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-14T15:25:43.017165Z","time spent":"417.201767ms","remote":"127.0.0.1:36058","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1101,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:577 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1028 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-10-14T15:25:43.436440Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"239.584929ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-14T15:25:43.437842Z","caller":"traceutil/trace.go:171","msg":"trace[538722744] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:578; }","duration":"241.003336ms","start":"2024-10-14T15:25:43.196830Z","end":"2024-10-14T15:25:43.437833Z","steps":["trace[538722744] 'agreement among raft nodes before linearized reading'  (duration: 239.517829ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-14T15:25:54.334795Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.572959ms","expected-duration":"100ms","prefix":"","request":"header:<ID:4142065410675148253 > lease_revoke:<id:397b928b9fb9854f>","response":"size:27"}
	{"level":"info","ts":"2024-10-14T15:25:54.335464Z","caller":"traceutil/trace.go:171","msg":"trace[1097985957] linearizableReadLoop","detail":"{readStateIndex:665; appliedIndex:662; }","duration":"187.256129ms","start":"2024-10-14T15:25:54.148194Z","end":"2024-10-14T15:25:54.335450Z","steps":["trace[1097985957] 'read index received'  (duration: 82.881067ms)","trace[1097985957] 'applied index is now lower than readState.Index'  (duration: 104.374362ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-14T15:25:54.335579Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"187.684431ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-671000-m02\" ","response":"range_response_count:1 size:2848"}
	{"level":"info","ts":"2024-10-14T15:25:54.335603Z","caller":"traceutil/trace.go:171","msg":"trace[537771586] range","detail":"{range_begin:/registry/minions/multinode-671000-m02; range_end:; response_count:1; response_revision:618; }","duration":"187.716631ms","start":"2024-10-14T15:25:54.147879Z","end":"2024-10-14T15:25:54.335596Z","steps":["trace[537771586] 'agreement among raft nodes before linearized reading'  (duration: 187.64833ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-14T15:25:54.336215Z","caller":"traceutil/trace.go:171","msg":"trace[1982019829] transaction","detail":"{read_only:false; response_revision:616; number_of_response:1; }","duration":"274.055713ms","start":"2024-10-14T15:25:54.062149Z","end":"2024-10-14T15:25:54.336205Z","steps":["trace[1982019829] 'process raft request'  (duration: 273.099809ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-14T15:25:54.336555Z","caller":"traceutil/trace.go:171","msg":"trace[1700008751] transaction","detail":"{read_only:false; response_revision:617; number_of_response:1; }","duration":"259.225447ms","start":"2024-10-14T15:25:54.077321Z","end":"2024-10-14T15:25:54.336547Z","steps":["trace[1700008751] 'process raft request'  (duration: 258.054642ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-14T15:25:54.336867Z","caller":"traceutil/trace.go:171","msg":"trace[1230617130] transaction","detail":"{read_only:false; response_revision:618; number_of_response:1; }","duration":"164.493428ms","start":"2024-10-14T15:25:54.172365Z","end":"2024-10-14T15:25:54.336859Z","steps":["trace[1230617130] 'process raft request'  (duration: 163.055422ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-14T15:25:54.342753Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"161.602115ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-14T15:25:54.342798Z","caller":"traceutil/trace.go:171","msg":"trace[63029805] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:618; }","duration":"161.649216ms","start":"2024-10-14T15:25:54.181139Z","end":"2024-10-14T15:25:54.342789Z","steps":["trace[63029805] 'agreement among raft nodes before linearized reading'  (duration: 161.589615ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-14T15:26:06.295299Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"114.439677ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-14T15:26:06.296602Z","caller":"traceutil/trace.go:171","msg":"trace[1476383392] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:638; }","duration":"115.718182ms","start":"2024-10-14T15:26:06.180839Z","end":"2024-10-14T15:26:06.296557Z","steps":["trace[1476383392] 'range keys from in-memory index tree'  (duration: 114.433077ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-14T15:26:06.295315Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"147.766715ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-671000-m02\" ","response":"range_response_count:1 size:3149"}
	{"level":"info","ts":"2024-10-14T15:26:06.297208Z","caller":"traceutil/trace.go:171","msg":"trace[362240605] range","detail":"{range_begin:/registry/minions/multinode-671000-m02; range_end:; response_count:1; response_revision:638; }","duration":"149.660923ms","start":"2024-10-14T15:26:06.147535Z","end":"2024-10-14T15:26:06.297196Z","steps":["trace[362240605] 'range keys from in-memory index tree'  (duration: 147.661615ms)"],"step_count":1}
	
	
	==> kernel <==
	 15:27:34 up 7 min,  0 users,  load average: 0.22, 0.27, 0.15
	Linux multinode-671000 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [fcdf89a3ac8c] <==
	I1014 15:26:24.862167       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 15:26:34.871429       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 15:26:34.871485       1 main.go:300] handling current node
	I1014 15:26:34.871504       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 15:26:34.871511       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 15:26:44.871191       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 15:26:44.871261       1 main.go:300] handling current node
	I1014 15:26:44.871279       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 15:26:44.871287       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 15:26:54.862964       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 15:26:54.863337       1 main.go:300] handling current node
	I1014 15:26:54.863390       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 15:26:54.863401       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 15:27:04.867284       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 15:27:04.868018       1 main.go:300] handling current node
	I1014 15:27:04.868212       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 15:27:04.868462       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 15:27:14.863392       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 15:27:14.863454       1 main.go:300] handling current node
	I1014 15:27:14.864034       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 15:27:14.864068       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 15:27:24.868357       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 15:27:24.868462       1 main.go:300] handling current node
	I1014 15:27:24.868501       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 15:27:24.868510       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [0b5a6e440d7b] <==
	I1014 15:22:37.166587       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1014 15:22:37.176635       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1014 15:22:37.177048       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1014 15:22:38.425288       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1014 15:22:38.498456       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1014 15:22:38.603833       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1014 15:22:38.618662       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.20.100.167]
	I1014 15:22:38.620575       1 controller.go:615] quota admission added evaluator for: endpoints
	I1014 15:22:38.629737       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1014 15:22:39.216069       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1014 15:22:39.727518       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1014 15:22:39.762438       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1014 15:22:39.797810       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1014 15:22:44.767442       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I1014 15:22:45.027328       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E1014 15:26:50.491514       1 conn.go:339] Error on socket receive: read tcp 172.20.100.167:8443->172.20.96.1:58130: use of closed network connection
	E1014 15:26:50.930965       1 conn.go:339] Error on socket receive: read tcp 172.20.100.167:8443->172.20.96.1:58132: use of closed network connection
	E1014 15:26:51.569414       1 conn.go:339] Error on socket receive: read tcp 172.20.100.167:8443->172.20.96.1:58134: use of closed network connection
	E1014 15:26:52.018221       1 conn.go:339] Error on socket receive: read tcp 172.20.100.167:8443->172.20.96.1:58136: use of closed network connection
	E1014 15:26:52.455295       1 conn.go:339] Error on socket receive: read tcp 172.20.100.167:8443->172.20.96.1:58138: use of closed network connection
	E1014 15:26:52.889971       1 conn.go:339] Error on socket receive: read tcp 172.20.100.167:8443->172.20.96.1:58140: use of closed network connection
	E1014 15:26:53.675413       1 conn.go:339] Error on socket receive: read tcp 172.20.100.167:8443->172.20.96.1:58143: use of closed network connection
	E1014 15:27:04.113268       1 conn.go:339] Error on socket receive: read tcp 172.20.100.167:8443->172.20.96.1:58145: use of closed network connection
	E1014 15:27:04.540295       1 conn.go:339] Error on socket receive: read tcp 172.20.100.167:8443->172.20.96.1:58148: use of closed network connection
	E1014 15:27:14.977468       1 conn.go:339] Error on socket receive: read tcp 172.20.100.167:8443->172.20.96.1:58150: use of closed network connection
	
	
	==> kube-controller-manager [712aad669c9f] <==
	I1014 15:23:10.921399       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000"
	I1014 15:25:49.920125       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-671000-m02\" does not exist"
	I1014 15:25:49.955308       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-671000-m02" podCIDRs=["10.244.1.0/24"]
	I1014 15:25:49.956041       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 15:25:49.956493       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 15:25:50.332394       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 15:25:50.885049       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 15:25:54.059204       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-671000-m02"
	I1014 15:25:54.342262       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 15:26:00.157293       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 15:26:18.720546       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 15:26:18.720611       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-671000-m02"
	I1014 15:26:18.738467       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 15:26:19.084143       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 15:26:20.411603       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 15:26:44.435156       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="75.721873ms"
	I1014 15:26:44.496244       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="60.852418ms"
	I1014 15:26:44.496945       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="131.501µs"
	I1014 15:26:44.540742       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="117.6µs"
	I1014 15:26:47.680512       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="13.465591ms"
	I1014 15:26:47.680616       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="27.8µs"
	I1014 15:26:47.878633       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="13.308091ms"
	I1014 15:26:47.878779       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="30.7µs"
	I1014 15:26:50.724728       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 15:27:15.823577       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000"
	
	
	==> kube-proxy [ea19428d7036] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1014 15:22:47.546236       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1014 15:22:47.606437       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["172.20.100.167"]
	E1014 15:22:47.606505       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1014 15:22:47.788755       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1014 15:22:47.788820       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1014 15:22:47.788852       1 server_linux.go:169] "Using iptables Proxier"
	I1014 15:22:47.807050       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1014 15:22:47.807424       1 server.go:483] "Version info" version="v1.31.1"
	I1014 15:22:47.807440       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 15:22:47.810361       1 config.go:199] "Starting service config controller"
	I1014 15:22:47.810416       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1014 15:22:47.810600       1 config.go:105] "Starting endpoint slice config controller"
	I1014 15:22:47.810629       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1014 15:22:47.811297       1 config.go:328] "Starting node config controller"
	I1014 15:22:47.811309       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1014 15:22:47.911443       1 shared_informer.go:320] Caches are synced for node config
	I1014 15:22:47.911791       1 shared_informer.go:320] Caches are synced for service config
	I1014 15:22:47.911866       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [661e75bbf6b4] <==
	W1014 15:22:37.205810       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1014 15:22:37.206152       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1014 15:22:37.269786       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1014 15:22:37.269856       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 15:22:37.270258       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1014 15:22:37.270286       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1014 15:22:37.290857       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1014 15:22:37.290997       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 15:22:37.304519       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1014 15:22:37.305020       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1014 15:22:37.328746       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1014 15:22:37.329049       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 15:22:37.338059       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1014 15:22:37.338269       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 15:22:37.394728       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1014 15:22:37.394803       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 15:22:37.445455       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1014 15:22:37.445612       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1014 15:22:37.490285       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1014 15:22:37.490817       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 15:22:37.537304       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1014 15:22:37.537396       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1014 15:22:37.713713       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1014 15:22:37.713777       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1014 15:22:39.593596       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 14 15:23:06 multinode-671000 kubelet[2226]: I1014 15:23:06.747155    2226 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sk57d\" (UniqueName: \"kubernetes.io/projected/fde8ff75-bc7f-4db4-b098-c3a08b38d205-kube-api-access-sk57d\") pod \"storage-provisioner\" (UID: \"fde8ff75-bc7f-4db4-b098-c3a08b38d205\") " pod="kube-system/storage-provisioner"
	Oct 14 15:23:06 multinode-671000 kubelet[2226]: I1014 15:23:06.747194    2226 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/fde8ff75-bc7f-4db4-b098-c3a08b38d205-tmp\") pod \"storage-provisioner\" (UID: \"fde8ff75-bc7f-4db4-b098-c3a08b38d205\") " pod="kube-system/storage-provisioner"
	Oct 14 15:23:08 multinode-671000 kubelet[2226]: I1014 15:23:08.866948    2226 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=16.866930782 podStartE2EDuration="16.866930782s" podCreationTimestamp="2024-10-14 15:22:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-14 15:23:07.840963305 +0000 UTC m=+28.224967161" watchObservedRunningTime="2024-10-14 15:23:08.866930782 +0000 UTC m=+29.250934738"
	Oct 14 15:23:08 multinode-671000 kubelet[2226]: I1014 15:23:08.867053    2226 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podStartSLOduration=23.867046785 podStartE2EDuration="23.867046785s" podCreationTimestamp="2024-10-14 15:22:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-14 15:23:08.865699253 +0000 UTC m=+29.249703109" watchObservedRunningTime="2024-10-14 15:23:08.867046785 +0000 UTC m=+29.251050641"
	Oct 14 15:23:39 multinode-671000 kubelet[2226]: E1014 15:23:39.912302    2226 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 14 15:23:39 multinode-671000 kubelet[2226]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 14 15:23:39 multinode-671000 kubelet[2226]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 14 15:23:39 multinode-671000 kubelet[2226]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 14 15:23:39 multinode-671000 kubelet[2226]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 14 15:24:39 multinode-671000 kubelet[2226]: E1014 15:24:39.913035    2226 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 14 15:24:39 multinode-671000 kubelet[2226]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 14 15:24:39 multinode-671000 kubelet[2226]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 14 15:24:39 multinode-671000 kubelet[2226]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 14 15:24:39 multinode-671000 kubelet[2226]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 14 15:25:39 multinode-671000 kubelet[2226]: E1014 15:25:39.913045    2226 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 14 15:25:39 multinode-671000 kubelet[2226]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 14 15:25:39 multinode-671000 kubelet[2226]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 14 15:25:39 multinode-671000 kubelet[2226]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 14 15:25:39 multinode-671000 kubelet[2226]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 14 15:26:39 multinode-671000 kubelet[2226]: E1014 15:26:39.913129    2226 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 14 15:26:39 multinode-671000 kubelet[2226]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 14 15:26:39 multinode-671000 kubelet[2226]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 14 15:26:39 multinode-671000 kubelet[2226]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 14 15:26:39 multinode-671000 kubelet[2226]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 14 15:26:44 multinode-671000 kubelet[2226]: I1014 15:26:44.554853    2226 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46k9l\" (UniqueName: \"kubernetes.io/projected/99068807-9f92-42f1-a1a0-fb6e533dc61a-kube-api-access-46k9l\") pod \"busybox-7dff88458-vlp7j\" (UID: \"99068807-9f92-42f1-a1a0-fb6e533dc61a\") " pod="default/busybox-7dff88458-vlp7j"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-671000 -n multinode-671000
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-671000 -n multinode-671000: (11.6326901s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-671000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (55.60s)
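For reference, the pod-to-host ping this test exercises can be re-run by hand against the same profile. A minimal sketch (PowerShell, from the Windows host), assuming the busybox pod name shown in the node description above and taking 172.20.96.1 — the host-side address that appears in the kube-apiserver socket errors — as a stand-in ping target; the address the test actually uses may differ:

	# list the busybox pods the test scheduled (context name from the harness commands above)
	kubectl --context multinode-671000 get pods -o wide
	# ping the presumed Hyper-V host address from inside one of them (target IP is an assumption)
	kubectl --context multinode-671000 exec busybox-7dff88458-bnqj6 -- ping -c 1 172.20.96.1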

TestMultiNode/serial/RestartKeepsNodes (479.39s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-671000
multinode_test.go:321: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-671000
E1014 08:43:40.471879     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\functional-572000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-windows-amd64.exe stop -p multinode-671000: (1m36.7820741s)
multinode_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-671000 --wait=true -v=8 --alsologtostderr
E1014 08:44:10.851693     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\addons-043000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1014 08:45:37.382223     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\functional-572000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1014 08:49:10.851326     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\addons-043000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-671000 --wait=true -v=8 --alsologtostderr: exit status 1 (5m32.400871s)
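Before the captured output below, note that the harness steps above amount to a sequence that can be replayed manually with the same binary. A minimal sketch (PowerShell, paths and profile name as in this run); the final node list is an assumption about the check implied by the test name, not something the log itself confirms:

	# stop and restart the whole multi-node profile, then re-list its nodes
	out/minikube-windows-amd64.exe node list -p multinode-671000
	out/minikube-windows-amd64.exe stop -p multinode-671000
	out/minikube-windows-amd64.exe start -p multinode-671000 --wait=true -v=8 --alsologtostderr
	# assumption: all three nodes (multinode-671000, m02, m03) should still be listed
	out/minikube-windows-amd64.exe node list -p multinode-671000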

-- stdout --
	* [multinode-671000] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5011 Build 19045.5011
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19790
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting "multinode-671000" primary control-plane node in "multinode-671000" cluster
	* Restarting existing hyperv VM for "multinode-671000" ...
	* Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	
	* Starting "multinode-671000-m02" worker node in "multinode-671000" cluster
	* Restarting existing hyperv VM for "multinode-671000-m02" ...
	* Found network options:
	  - NO_PROXY=172.20.106.123
	  - NO_PROXY=172.20.106.123
	* Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	  - env NO_PROXY=172.20.106.123

-- /stdout --
** stderr ** 
	I1014 08:44:07.389674   15224 out.go:345] Setting OutFile to fd 1804 ...
	I1014 08:44:07.390740   15224 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 08:44:07.390740   15224 out.go:358] Setting ErrFile to fd 972...
	I1014 08:44:07.390740   15224 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 08:44:07.415971   15224 out.go:352] Setting JSON to false
	I1014 08:44:07.420984   15224 start.go:129] hostinfo: {"hostname":"minikube1","uptime":106161,"bootTime":1728814485,"procs":201,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5011 Build 19045.5011","kernelVersion":"10.0.19045.5011 Build 19045.5011","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W1014 08:44:07.421993   15224 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1014 08:44:07.534857   15224 out.go:177] * [multinode-671000] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5011 Build 19045.5011
	I1014 08:44:07.543901   15224 notify.go:220] Checking for updates...
	I1014 08:44:07.549086   15224 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1014 08:44:07.555416   15224 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 08:44:07.585693   15224 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I1014 08:44:07.605965   15224 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 08:44:07.620024   15224 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 08:44:07.631387   15224 config.go:182] Loaded profile config "multinode-671000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 08:44:07.631791   15224 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 08:44:13.384132   15224 out.go:177] * Using the hyperv driver based on existing profile
	I1014 08:44:13.393772   15224 start.go:297] selected driver: hyperv
	I1014 08:44:13.393772   15224 start.go:901] validating driver "hyperv" against &{Name:multinode-671000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-671000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.100.167 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.20.109.137 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.20.102.29 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 08:44:13.394206   15224 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 08:44:13.459167   15224 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 08:44:13.459337   15224 cni.go:84] Creating CNI manager for ""
	I1014 08:44:13.459337   15224 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1014 08:44:13.459620   15224 start.go:340] cluster config:
	{Name:multinode-671000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-671000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.100.167 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.20.109.137 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.20.102.29 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 08:44:13.459651   15224 iso.go:125] acquiring lock: {Name:mkb71635bcbccba29dce9048ad4cc71430a7e577 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 08:44:13.545479   15224 out.go:177] * Starting "multinode-671000" primary control-plane node in "multinode-671000" cluster
	I1014 08:44:13.553332   15224 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1014 08:44:13.553562   15224 preload.go:146] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I1014 08:44:13.553562   15224 cache.go:56] Caching tarball of preloaded images
	I1014 08:44:13.553562   15224 preload.go:172] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1014 08:44:13.554293   15224 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1014 08:44:13.554511   15224 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000\config.json ...
	I1014 08:44:13.557472   15224 start.go:360] acquireMachinesLock for multinode-671000: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 08:44:13.557472   15224 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-671000"
	I1014 08:44:13.558046   15224 start.go:96] Skipping create...Using existing machine configuration
	I1014 08:44:13.558046   15224 fix.go:54] fixHost starting: 
	I1014 08:44:13.558870   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000 ).state
	I1014 08:44:16.261637   15224 main.go:141] libmachine: [stdout =====>] : Off
	
	I1014 08:44:16.261637   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:44:16.262646   15224 fix.go:112] recreateIfNeeded on multinode-671000: state=Stopped err=<nil>
	W1014 08:44:16.262646   15224 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 08:44:16.293998   15224 out.go:177] * Restarting existing hyperv VM for "multinode-671000" ...
	I1014 08:44:16.384669   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-671000
	I1014 08:44:19.629584   15224 main.go:141] libmachine: [stdout =====>] : 
	I1014 08:44:19.629732   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:44:19.629732   15224 main.go:141] libmachine: Waiting for host to start...
	I1014 08:44:19.629732   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000 ).state
	I1014 08:44:21.853637   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:44:21.854494   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:44:21.854566   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000 ).networkadapters[0]).ipaddresses[0]
	I1014 08:44:24.301745   15224 main.go:141] libmachine: [stdout =====>] : 
	I1014 08:44:24.301745   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:44:25.302201   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000 ).state
	I1014 08:44:27.422612   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:44:27.422612   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:44:27.422924   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000 ).networkadapters[0]).ipaddresses[0]
	I1014 08:44:29.871404   15224 main.go:141] libmachine: [stdout =====>] : 
	I1014 08:44:29.872460   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:44:30.873287   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000 ).state
	I1014 08:44:33.011425   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:44:33.011631   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:44:33.011677   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000 ).networkadapters[0]).ipaddresses[0]
	I1014 08:44:35.443734   15224 main.go:141] libmachine: [stdout =====>] : 
	I1014 08:44:35.443734   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:44:36.444215   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000 ).state
	I1014 08:44:38.627293   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:44:38.627351   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:44:38.627351   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000 ).networkadapters[0]).ipaddresses[0]
	I1014 08:44:41.124871   15224 main.go:141] libmachine: [stdout =====>] : 
	I1014 08:44:41.125002   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:44:42.125974   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000 ).state
	I1014 08:44:44.316671   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:44:44.316852   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:44:44.316852   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000 ).networkadapters[0]).ipaddresses[0]
	I1014 08:44:46.942427   15224 main.go:141] libmachine: [stdout =====>] : 172.20.106.123
	
	I1014 08:44:46.942427   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:44:46.945696   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000 ).state
	I1014 08:44:49.011131   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:44:49.011131   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:44:49.011131   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000 ).networkadapters[0]).ipaddresses[0]
	I1014 08:44:51.492027   15224 main.go:141] libmachine: [stdout =====>] : 172.20.106.123
	
	I1014 08:44:51.492027   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:44:51.492559   15224 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000\config.json ...
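The block above shows the Hyper-V driver polling the VM until its first network adapter reports an IPv4 address: stdout is empty at 08:44:29 and 08:44:35, then 172.20.106.123 finally appears and the profile is saved. A minimal Go sketch of that retry loop, assuming powershell.exe is on PATH; the VM name, back-off, and retry budget are illustrative, not minikube's actual values:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// getVMIP mirrors the polling pattern in the log: ask Hyper-V for the VM's
// first IP address via PowerShell, retrying until the guest reports one.
func getVMIP(vm string, attempts int) (string, error) {
	query := fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vm)
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", query).Output()
		if err != nil {
			return "", err
		}
		if ip := strings.TrimSpace(string(out)); ip != "" {
			return ip, nil // adapter has an address; guest integration services are up
		}
		time.Sleep(time.Second) // no address reported yet; back off and retry
	}
	return "", fmt.Errorf("no IP reported for %s after %d attempts", vm, attempts)
}

func main() {
	ip, err := getVMIP("multinode-671000", 10)
	if err != nil {
		panic(err)
	}
	fmt.Println(ip)
}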
	I1014 08:44:51.495554   15224 machine.go:93] provisionDockerMachine start ...
	I1014 08:44:51.496082   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000 ).state
	I1014 08:44:53.557335   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:44:53.557425   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:44:53.557626   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000 ).networkadapters[0]).ipaddresses[0]
	I1014 08:44:56.041063   15224 main.go:141] libmachine: [stdout =====>] : 172.20.106.123
	
	I1014 08:44:56.041063   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:44:56.047492   15224 main.go:141] libmachine: Using SSH client type: native
	I1014 08:44:56.048427   15224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.106.123 22 <nil> <nil>}
	I1014 08:44:56.048460   15224 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 08:44:56.177780   15224 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1014 08:44:56.177921   15224 buildroot.go:166] provisioning hostname "multinode-671000"
	I1014 08:44:56.177921   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000 ).state
	I1014 08:44:58.222838   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:44:58.222838   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:44:58.222838   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000 ).networkadapters[0]).ipaddresses[0]
	I1014 08:45:00.709338   15224 main.go:141] libmachine: [stdout =====>] : 172.20.106.123
	
	I1014 08:45:00.709338   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:45:00.716168   15224 main.go:141] libmachine: Using SSH client type: native
	I1014 08:45:00.716859   15224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.106.123 22 <nil> <nil>}
	I1014 08:45:00.716859   15224 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-671000 && echo "multinode-671000" | sudo tee /etc/hostname
	I1014 08:45:00.863452   15224 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-671000
	
	I1014 08:45:00.863530   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000 ).state
	I1014 08:45:02.987244   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:45:02.987365   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:45:02.987487   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000 ).networkadapters[0]).ipaddresses[0]
	I1014 08:45:05.466484   15224 main.go:141] libmachine: [stdout =====>] : 172.20.106.123
	
	I1014 08:45:05.466661   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:45:05.472466   15224 main.go:141] libmachine: Using SSH client type: native
	I1014 08:45:05.473098   15224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.106.123 22 <nil> <nil>}
	I1014 08:45:05.473192   15224 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-671000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-671000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-671000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 08:45:05.623017   15224 main.go:141] libmachine: SSH cmd err, output: <nil>: 
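The SSH snippet above keeps /etc/hosts consistent with the freshly set hostname: if no line already ends in the hostname, it rewrites an existing 127.0.1.1 entry in place, otherwise it appends one. A rough local equivalent in Go (run as root; the regexes approximate, rather than exactly replicate, the grep/sed patterns in the log):

package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

// ensureHostname mirrors the logic of the SSH snippet above: make sure the
// hosts file maps 127.0.1.1 to the machine's hostname, editing an existing
// 127.0.1.1 line if present and appending one otherwise.
func ensureHostname(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	if regexp.MustCompile(`(?m)\s`+regexp.QuoteMeta(hostname)+`$`).Match(data) {
		return nil // hostname already resolvable locally; nothing to do
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	entry := "127.0.1.1 " + hostname
	var out string
	if loopback.Match(data) {
		out = loopback.ReplaceAllString(string(data), entry)
	} else {
		out = strings.TrimRight(string(data), "\n") + "\n" + entry + "\n"
	}
	return os.WriteFile(path, []byte(out), 0644)
}

func main() {
	if err := ensureHostname("/etc/hosts", "multinode-671000"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}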
	I1014 08:45:05.623107   15224 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I1014 08:45:05.623107   15224 buildroot.go:174] setting up certificates
	I1014 08:45:05.623229   15224 provision.go:84] configureAuth start
	I1014 08:45:05.623301   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000 ).state
	I1014 08:45:07.693415   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:45:07.694278   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:45:07.694379   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000 ).networkadapters[0]).ipaddresses[0]
	I1014 08:45:10.221863   15224 main.go:141] libmachine: [stdout =====>] : 172.20.106.123
	
	I1014 08:45:10.221920   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:45:10.221920   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000 ).state
	I1014 08:45:12.270483   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:45:12.270483   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:45:12.270483   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000 ).networkadapters[0]).ipaddresses[0]
	I1014 08:45:14.731822   15224 main.go:141] libmachine: [stdout =====>] : 172.20.106.123
	
	I1014 08:45:14.732454   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:45:14.732454   15224 provision.go:143] copyHostCerts
	I1014 08:45:14.732638   15224 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I1014 08:45:14.732869   15224 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I1014 08:45:14.732869   15224 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I1014 08:45:14.733484   15224 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1014 08:45:14.734974   15224 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I1014 08:45:14.735172   15224 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I1014 08:45:14.735172   15224 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I1014 08:45:14.735172   15224 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1014 08:45:14.736608   15224 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I1014 08:45:14.736608   15224 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I1014 08:45:14.736608   15224 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I1014 08:45:14.737527   15224 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I1014 08:45:14.738625   15224 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-671000 san=[127.0.0.1 172.20.106.123 localhost minikube multinode-671000]
	I1014 08:45:14.822439   15224 provision.go:177] copyRemoteCerts
	I1014 08:45:14.832452   15224 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 08:45:14.833292   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000 ).state
	I1014 08:45:16.858535   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:45:16.858594   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:45:16.858594   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000 ).networkadapters[0]).ipaddresses[0]
	I1014 08:45:19.312599   15224 main.go:141] libmachine: [stdout =====>] : 172.20.106.123
	
	I1014 08:45:19.312671   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:45:19.312744   15224 sshutil.go:53] new ssh client: &{IP:172.20.106.123 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-671000\id_rsa Username:docker}
	I1014 08:45:19.418940   15224 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.5864803s)
	I1014 08:45:19.419024   15224 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1014 08:45:19.421274   15224 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I1014 08:45:19.467514   15224 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1014 08:45:19.467514   15224 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1014 08:45:19.512423   15224 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1014 08:45:19.513692   15224 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1014 08:45:19.558955   15224 provision.go:87] duration metric: took 13.9356349s to configureAuth
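configureAuth issues a Docker server certificate whose SAN list (loopback, the VM IP, localhost, minikube, and the machine name, per the san=[...] entry above) lets clients verify the daemon under any address it answers on, then scps the cert, key, and CA into /etc/docker. A self-contained sketch of minting such a SAN-bearing certificate with Go's standard library; the throwaway CA, elided error handling, and omitted key output are simplifications, not minikube's actual provisioning helper:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// NOTE: error handling elided for brevity; a real helper must check each err.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-671000"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs: both IPs and DNS names, matching the san=[...] list logged above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.20.106.123")},
		DNSNames:    []string{"localhost", "minikube", "multinode-671000"},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}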
	I1014 08:45:19.559019   15224 buildroot.go:189] setting minikube options for container-runtime
	I1014 08:45:19.559648   15224 config.go:182] Loaded profile config "multinode-671000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 08:45:19.559648   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000 ).state
	I1014 08:45:21.637227   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:45:21.638017   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:45:21.638080   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000 ).networkadapters[0]).ipaddresses[0]
	I1014 08:45:24.073085   15224 main.go:141] libmachine: [stdout =====>] : 172.20.106.123
	
	I1014 08:45:24.073890   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:45:24.084887   15224 main.go:141] libmachine: Using SSH client type: native
	I1014 08:45:24.085628   15224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.106.123 22 <nil> <nil>}
	I1014 08:45:24.085628   15224 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1014 08:45:24.216534   15224 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1014 08:45:24.216643   15224 buildroot.go:70] root file system type: tmpfs
	I1014 08:45:24.216959   15224 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1014 08:45:24.217137   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000 ).state
	I1014 08:45:26.234454   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:45:26.234591   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:45:26.234591   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000 ).networkadapters[0]).ipaddresses[0]
	I1014 08:45:28.733290   15224 main.go:141] libmachine: [stdout =====>] : 172.20.106.123
	
	I1014 08:45:28.733290   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:45:28.739195   15224 main.go:141] libmachine: Using SSH client type: native
	I1014 08:45:28.740129   15224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.106.123 22 <nil> <nil>}
	I1014 08:45:28.740206   15224 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1014 08:45:28.895049   15224 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1014 08:45:28.895170   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000 ).state
	I1014 08:45:30.970482   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:45:30.971402   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:45:30.971551   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000 ).networkadapters[0]).ipaddresses[0]
	I1014 08:45:33.392031   15224 main.go:141] libmachine: [stdout =====>] : 172.20.106.123
	
	I1014 08:45:33.392353   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:45:33.399014   15224 main.go:141] libmachine: Using SSH client type: native
	I1014 08:45:33.399224   15224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.106.123 22 <nil> <nil>}
	I1014 08:45:33.399224   15224 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1014 08:45:35.856287   15224 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1014 08:45:35.856287   15224 machine.go:96] duration metric: took 44.3606533s to provisionDockerMachine
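The diff || { mv ...; systemctl ...; } one-liner above is an install-if-changed guard: diff exits non-zero when the staged unit differs from the installed one, or, as here, when no docker.service exists yet, and only then is the staged file swapped in and the service reloaded, enabled, and restarted. A sketch of the same idea, assuming root on a systemd host; paths and the service name mirror the log:

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// installIfChanged mirrors the diff-||-replace idiom in the log: only move
// the staged unit into place and restart the service when its content
// differs from (or there is no) existing unit file.
func installIfChanged(current, staged, service string) error {
	old, readErr := os.ReadFile(current)
	next, err := os.ReadFile(staged)
	if err != nil {
		return err
	}
	if readErr == nil && bytes.Equal(old, next) {
		return os.Remove(staged) // unchanged: leave the running service alone
	}
	if err := os.Rename(staged, current); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"daemon-reload"}, {"enable", service}, {"restart", service},
	} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("systemctl %v: %v\n%s", args, err, out)
		}
	}
	return nil
}

func main() {
	if err := installIfChanged("/lib/systemd/system/docker.service",
		"/lib/systemd/system/docker.service.new", "docker"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}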
	I1014 08:45:35.856287   15224 start.go:293] postStartSetup for "multinode-671000" (driver="hyperv")
	I1014 08:45:35.856287   15224 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 08:45:35.866878   15224 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 08:45:35.866878   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000 ).state
	I1014 08:45:37.902871   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:45:37.902871   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:45:37.903376   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000 ).networkadapters[0]).ipaddresses[0]
	I1014 08:45:40.389463   15224 main.go:141] libmachine: [stdout =====>] : 172.20.106.123
	
	I1014 08:45:40.389539   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:45:40.389539   15224 sshutil.go:53] new ssh client: &{IP:172.20.106.123 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-671000\id_rsa Username:docker}
	I1014 08:45:40.498571   15224 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.631685s)
	I1014 08:45:40.512486   15224 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 08:45:40.520099   15224 command_runner.go:130] > NAME=Buildroot
	I1014 08:45:40.520099   15224 command_runner.go:130] > VERSION=2023.02.9-dirty
	I1014 08:45:40.520099   15224 command_runner.go:130] > ID=buildroot
	I1014 08:45:40.520099   15224 command_runner.go:130] > VERSION_ID=2023.02.9
	I1014 08:45:40.520200   15224 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I1014 08:45:40.520478   15224 info.go:137] Remote host: Buildroot 2023.02.9
	I1014 08:45:40.520550   15224 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I1014 08:45:40.521350   15224 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I1014 08:45:40.521914   15224 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem -> 9362.pem in /etc/ssl/certs
	I1014 08:45:40.521914   15224 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem -> /etc/ssl/certs/9362.pem
	I1014 08:45:40.533476   15224 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 08:45:40.553303   15224 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem --> /etc/ssl/certs/9362.pem (1708 bytes)
	I1014 08:45:40.600329   15224 start.go:296] duration metric: took 4.7440338s for postStartSetup
	I1014 08:45:40.600329   15224 fix.go:56] duration metric: took 1m27.0421262s for fixHost
	I1014 08:45:40.600329   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000 ).state
	I1014 08:45:42.636618   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:45:42.636671   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:45:42.636714   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000 ).networkadapters[0]).ipaddresses[0]
	I1014 08:45:45.078391   15224 main.go:141] libmachine: [stdout =====>] : 172.20.106.123
	
	I1014 08:45:45.079558   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:45:45.084901   15224 main.go:141] libmachine: Using SSH client type: native
	I1014 08:45:45.085524   15224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.106.123 22 <nil> <nil>}
	I1014 08:45:45.085524   15224 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1014 08:45:45.218652   15224 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728920745.219040960
	
	I1014 08:45:45.218652   15224 fix.go:216] guest clock: 1728920745.219040960
	I1014 08:45:45.218652   15224 fix.go:229] Guest: 2024-10-14 08:45:45.21904096 -0700 PDT Remote: 2024-10-14 08:45:40.6003296 -0700 PDT m=+93.303151401 (delta=4.61871136s)
	I1014 08:45:45.218949   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000 ).state
	I1014 08:45:47.298917   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:45:47.298917   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:45:47.299813   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000 ).networkadapters[0]).ipaddresses[0]
	I1014 08:45:49.728125   15224 main.go:141] libmachine: [stdout =====>] : 172.20.106.123
	
	I1014 08:45:49.728826   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:45:49.734542   15224 main.go:141] libmachine: Using SSH client type: native
	I1014 08:45:49.734623   15224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.106.123 22 <nil> <nil>}
	I1014 08:45:49.734623   15224 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1728920745
	I1014 08:45:49.881262   15224 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Oct 14 15:45:45 UTC 2024
	
	I1014 08:45:49.881352   15224 fix.go:236] clock set: Mon Oct 14 15:45:45 UTC 2024
	 (err=<nil>)
	I1014 08:45:49.881352   15224 start.go:83] releasing machines lock for "multinode-671000", held for 1m36.323176s
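The clock-fix step just above reads the guest clock with date +%s.%N over SSH, diffs it against the host clock (a 4.61s delta here, accumulated while the VM sat paused during reconfiguration), and pins the guest with sudo date -s @<unix-seconds>. A rough sketch of that measurement using the ssh CLI rather than minikube's internal SSH runner; the host alias and the one-second tolerance are illustrative:

package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
	"time"
)

func main() {
	const host = "docker@172.20.106.123" // illustrative alias, not from the log
	out, err := exec.Command("ssh", host, "date +%s.%N").Output()
	if err != nil {
		panic(err)
	}
	guest, err := strconv.ParseFloat(strings.TrimSpace(string(out)), 64)
	if err != nil {
		panic(err)
	}
	// Skew = guest clock minus local clock, in seconds, converted to a Duration.
	skew := time.Duration((guest - float64(time.Now().UnixNano())/1e9) * float64(time.Second))
	fmt.Printf("guest-host skew: %v\n", skew)
	if skew > time.Second || skew < -time.Second {
		// Pin the guest to the host's current Unix time, as in `sudo date -s @...`.
		cmd := fmt.Sprintf("sudo date -s @%d", time.Now().Unix())
		if err := exec.Command("ssh", host, cmd).Run(); err != nil {
			panic(err)
		}
	}
}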
	I1014 08:45:49.881526   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000 ).state
	I1014 08:45:51.958259   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:45:51.958682   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:45:51.958682   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000 ).networkadapters[0]).ipaddresses[0]
	I1014 08:45:54.416595   15224 main.go:141] libmachine: [stdout =====>] : 172.20.106.123
	
	I1014 08:45:54.416595   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:45:54.421939   15224 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1014 08:45:54.422094   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000 ).state
	I1014 08:45:54.431567   15224 ssh_runner.go:195] Run: cat /version.json
	I1014 08:45:54.431567   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000 ).state
	I1014 08:45:56.596858   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:45:56.596858   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:45:56.596858   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:45:56.596858   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:45:56.597666   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000 ).networkadapters[0]).ipaddresses[0]
	I1014 08:45:56.597773   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000 ).networkadapters[0]).ipaddresses[0]
	I1014 08:45:59.164179   15224 main.go:141] libmachine: [stdout =====>] : 172.20.106.123
	
	I1014 08:45:59.164179   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:45:59.164179   15224 sshutil.go:53] new ssh client: &{IP:172.20.106.123 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-671000\id_rsa Username:docker}
	I1014 08:45:59.181617   15224 main.go:141] libmachine: [stdout =====>] : 172.20.106.123
	
	I1014 08:45:59.181940   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:45:59.182091   15224 sshutil.go:53] new ssh client: &{IP:172.20.106.123 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-671000\id_rsa Username:docker}
	I1014 08:45:59.250252   15224 command_runner.go:130] > {"iso_version": "v1.34.0-1728382514-19774", "kicbase_version": "v0.0.45-1728063813-19756", "minikube_version": "v1.34.0", "commit": "cf9f11c2b0369efc07a929c4a1fdb2b4b3c62ee9"}
	I1014 08:45:59.250402   15224 ssh_runner.go:235] Completed: cat /version.json: (4.8188261s)
	I1014 08:45:59.264323   15224 ssh_runner.go:195] Run: systemctl --version
	I1014 08:45:59.268396   15224 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I1014 08:45:59.268396   15224 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.8464155s)
	W1014 08:45:59.268396   15224 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1014 08:45:59.272830   15224 command_runner.go:130] > systemd 252 (252)
	I1014 08:45:59.272830   15224 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I1014 08:45:59.284720   15224 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1014 08:45:59.292625   15224 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1014 08:45:59.293751   15224 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 08:45:59.304084   15224 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 08:45:59.331817   15224 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1014 08:45:59.331817   15224 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
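The find/-exec step above sidelines competing CNI configs: anything matching *bridge* or *podman* in /etc/cni/net.d is renamed with a .mk_disabled suffix so the runtime's *.conf/*.conflist globs no longer pick it up (kindnet will supply CNI for this multinode cluster, as the later "recommending kindnet" line shows). A Go sketch of the same rename pass, run as root:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableConflictingCNI mirrors the find/mv step above: sideline bridge and
// podman CNI configs by renaming them with a .mk_disabled suffix.
func disableConflictingCNI(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	files, err := disableConflictingCNI("/etc/cni/net.d")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("disabled:", files)
}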
	I1014 08:45:59.331975   15224 start.go:495] detecting cgroup driver to use...
	I1014 08:45:59.332269   15224 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 08:45:59.368515   15224 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	W1014 08:45:59.375133   15224 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W1014 08:45:59.375133   15224 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1014 08:45:59.380886   15224 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1014 08:45:59.411692   15224 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1014 08:45:59.430899   15224 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1014 08:45:59.441646   15224 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1014 08:45:59.470900   15224 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1014 08:45:59.504488   15224 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1014 08:45:59.533997   15224 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1014 08:45:59.565330   15224 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 08:45:59.598642   15224 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1014 08:45:59.629725   15224 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1014 08:45:59.657570   15224 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1014 08:45:59.688012   15224 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 08:45:59.705351   15224 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1014 08:45:59.705351   15224 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1014 08:45:59.715896   15224 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1014 08:45:59.748369   15224 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 08:45:59.773568   15224 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 08:45:59.965755   15224 ssh_runner.go:195] Run: sudo systemctl restart containerd
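The sysctl failure a few lines up is expected on a fresh boot: /proc/sys/net/bridge/ only exists once the br_netfilter module is loaded, so the status-255 probe is followed by modprobe br_netfilter and then by enabling IPv4 forwarding. A minimal sketch of that fallback, assuming root on the guest; paths are the standard procfs locations:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const probe = "/proc/sys/net/bridge/bridge-nf-call-iptables"
	if _, err := os.Stat(probe); os.IsNotExist(err) {
		// Module not loaded yet; loading it creates the bridge sysctl tree.
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Fprintf(os.Stderr, "modprobe: %v\n%s", err, out)
			os.Exit(1)
		}
	}
	// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward` in the log.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}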
	I1014 08:46:00.003898   15224 start.go:495] detecting cgroup driver to use...
	I1014 08:46:00.015390   15224 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1014 08:46:00.047005   15224 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1014 08:46:00.047071   15224 command_runner.go:130] > [Unit]
	I1014 08:46:00.047071   15224 command_runner.go:130] > Description=Docker Application Container Engine
	I1014 08:46:00.047071   15224 command_runner.go:130] > Documentation=https://docs.docker.com
	I1014 08:46:00.047071   15224 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I1014 08:46:00.047071   15224 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I1014 08:46:00.047071   15224 command_runner.go:130] > StartLimitBurst=3
	I1014 08:46:00.047156   15224 command_runner.go:130] > StartLimitIntervalSec=60
	I1014 08:46:00.047156   15224 command_runner.go:130] > [Service]
	I1014 08:46:00.047156   15224 command_runner.go:130] > Type=notify
	I1014 08:46:00.047156   15224 command_runner.go:130] > Restart=on-failure
	I1014 08:46:00.047156   15224 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1014 08:46:00.047156   15224 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1014 08:46:00.047241   15224 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1014 08:46:00.047241   15224 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1014 08:46:00.047241   15224 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1014 08:46:00.047241   15224 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1014 08:46:00.047241   15224 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1014 08:46:00.047351   15224 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1014 08:46:00.047415   15224 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1014 08:46:00.047415   15224 command_runner.go:130] > ExecStart=
	I1014 08:46:00.047467   15224 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I1014 08:46:00.047545   15224 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1014 08:46:00.047583   15224 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1014 08:46:00.047583   15224 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1014 08:46:00.047583   15224 command_runner.go:130] > LimitNOFILE=infinity
	I1014 08:46:00.047583   15224 command_runner.go:130] > LimitNPROC=infinity
	I1014 08:46:00.047583   15224 command_runner.go:130] > LimitCORE=infinity
	I1014 08:46:00.047583   15224 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1014 08:46:00.047583   15224 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1014 08:46:00.047687   15224 command_runner.go:130] > TasksMax=infinity
	I1014 08:46:00.047717   15224 command_runner.go:130] > TimeoutStartSec=0
	I1014 08:46:00.047757   15224 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1014 08:46:00.047757   15224 command_runner.go:130] > Delegate=yes
	I1014 08:46:00.047757   15224 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1014 08:46:00.047757   15224 command_runner.go:130] > KillMode=process
	I1014 08:46:00.047757   15224 command_runner.go:130] > [Install]
	I1014 08:46:00.047844   15224 command_runner.go:130] > WantedBy=multi-user.target
	I1014 08:46:00.060088   15224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 08:46:00.091459   15224 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 08:46:00.136449   15224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 08:46:00.169625   15224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1014 08:46:00.202233   15224 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1014 08:46:00.263360   15224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1014 08:46:00.286997   15224 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 08:46:00.317875   15224 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1014 08:46:00.327743   15224 ssh_runner.go:195] Run: which cri-dockerd
	I1014 08:46:00.333762   15224 command_runner.go:130] > /usr/bin/cri-dockerd
	I1014 08:46:00.345178   15224 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1014 08:46:00.365900   15224 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1014 08:46:00.403545   15224 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1014 08:46:00.603475   15224 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1014 08:46:00.793419   15224 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1014 08:46:00.793941   15224 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1014 08:46:00.836113   15224 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 08:46:01.022899   15224 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1014 08:46:03.696947   15224 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6739455s)
	I1014 08:46:03.710831   15224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1014 08:46:03.744741   15224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1014 08:46:03.778138   15224 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1014 08:46:03.967436   15224 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1014 08:46:04.177295   15224 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 08:46:04.380206   15224 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1014 08:46:04.426934   15224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1014 08:46:04.463406   15224 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 08:46:04.662791   15224 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1014 08:46:04.769183   15224 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1014 08:46:04.779438   15224 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1014 08:46:04.790442   15224 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1014 08:46:04.790537   15224 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1014 08:46:04.790537   15224 command_runner.go:130] > Device: 0,22	Inode: 845         Links: 1
	I1014 08:46:04.790537   15224 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I1014 08:46:04.790623   15224 command_runner.go:130] > Access: 2024-10-14 15:46:04.687166886 +0000
	I1014 08:46:04.790623   15224 command_runner.go:130] > Modify: 2024-10-14 15:46:04.687166886 +0000
	I1014 08:46:04.790623   15224 command_runner.go:130] > Change: 2024-10-14 15:46:04.692166888 +0000
	I1014 08:46:04.790623   15224 command_runner.go:130] >  Birth: -
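After restarting cri-docker.service, the start path waits up to 60s for /var/run/cri-dockerd.sock, and the stat output above confirms it appeared as a unix socket (srw-rw----, root:docker). A sketch of that wait loop; the poll interval is illustrative:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket mirrors the "Will wait 60s for socket path" step: poll until
// the path exists and is a unix socket, or give up after the deadline.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s did not appear within %v", path, timeout)
}

func main() {
	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("cri-dockerd socket is up")
}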
	I1014 08:46:04.790623   15224 start.go:563] Will wait 60s for crictl version
	I1014 08:46:04.805088   15224 ssh_runner.go:195] Run: which crictl
	I1014 08:46:04.812980   15224 command_runner.go:130] > /usr/bin/crictl
	I1014 08:46:04.827838   15224 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1014 08:46:04.885551   15224 command_runner.go:130] > Version:  0.1.0
	I1014 08:46:04.885618   15224 command_runner.go:130] > RuntimeName:  docker
	I1014 08:46:04.885729   15224 command_runner.go:130] > RuntimeVersion:  27.3.1
	I1014 08:46:04.885729   15224 command_runner.go:130] > RuntimeApiVersion:  v1
	I1014 08:46:04.885793   15224 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I1014 08:46:04.893380   15224 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1014 08:46:04.924622   15224 command_runner.go:130] > 27.3.1
	I1014 08:46:04.936682   15224 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1014 08:46:04.964825   15224 command_runner.go:130] > 27.3.1
	I1014 08:46:04.970480   15224 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	I1014 08:46:04.970606   15224 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I1014 08:46:04.975359   15224 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I1014 08:46:04.975359   15224 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I1014 08:46:04.975359   15224 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I1014 08:46:04.975663   15224 ip.go:211] Found interface: {Index:13 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:08:65:4d Flags:up|broadcast|multicast|running}
	I1014 08:46:04.978430   15224 ip.go:214] interface addr: fe80::b548:ff79:d464:75b8/64
	I1014 08:46:04.978430   15224 ip.go:214] interface addr: 172.20.96.1/20
	I1014 08:46:04.987521   15224 ssh_runner.go:195] Run: grep 172.20.96.1	host.minikube.internal$ /etc/hosts
	I1014 08:46:04.993528   15224 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.20.96.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 08:46:05.014457   15224 kubeadm.go:883] updating cluster {Name:multinode-671000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-671000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.106.123 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.20.109.137 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.20.102.29 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 08:46:05.014457   15224 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1014 08:46:05.024919   15224 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1014 08:46:05.053890   15224 command_runner.go:130] > kindest/kindnetd:v20241007-36f62932
	I1014 08:46:05.053890   15224 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.31.1
	I1014 08:46:05.053890   15224 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.31.1
	I1014 08:46:05.054023   15224 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.31.1
	I1014 08:46:05.054023   15224 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.31.1
	I1014 08:46:05.054023   15224 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.3
	I1014 08:46:05.054023   15224 command_runner.go:130] > registry.k8s.io/etcd:3.5.15-0
	I1014 08:46:05.054023   15224 command_runner.go:130] > registry.k8s.io/pause:3.10
	I1014 08:46:05.054023   15224 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 08:46:05.054023   15224 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I1014 08:46:05.054187   15224 docker.go:689] Got preloaded images: -- stdout --
	kindest/kindnetd:v20241007-36f62932
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I1014 08:46:05.054210   15224 docker.go:619] Images already preloaded, skipping extraction
	I1014 08:46:05.067803   15224 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1014 08:46:05.095328   15224 command_runner.go:130] > kindest/kindnetd:v20241007-36f62932
	I1014 08:46:05.095417   15224 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.31.1
	I1014 08:46:05.095417   15224 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.31.1
	I1014 08:46:05.095417   15224 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.31.1
	I1014 08:46:05.095417   15224 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.31.1
	I1014 08:46:05.095491   15224 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.3
	I1014 08:46:05.095491   15224 command_runner.go:130] > registry.k8s.io/etcd:3.5.15-0
	I1014 08:46:05.095491   15224 command_runner.go:130] > registry.k8s.io/pause:3.10
	I1014 08:46:05.095491   15224 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 08:46:05.095491   15224 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I1014 08:46:05.095611   15224 docker.go:689] Got preloaded images: -- stdout --
	kindest/kindnetd:v20241007-36f62932
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I1014 08:46:05.095683   15224 cache_images.go:84] Images are preloaded, skipping loading
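The preload check above compares the runtime's image inventory (docker images --format {{.Repository}}:{{.Tag}}) against the image set the preload tarball should have delivered; every required image is already present, so tarball extraction is skipped. A sketch of the same membership test, using the image list printed in the log for Kubernetes v1.31.1:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	required := []string{
		"registry.k8s.io/kube-apiserver:v1.31.1",
		"registry.k8s.io/kube-scheduler:v1.31.1",
		"registry.k8s.io/kube-controller-manager:v1.31.1",
		"registry.k8s.io/kube-proxy:v1.31.1",
		"registry.k8s.io/coredns/coredns:v1.11.3",
		"registry.k8s.io/etcd:3.5.15-0",
		"registry.k8s.io/pause:3.10",
		"gcr.io/k8s-minikube/storage-provisioner:v5",
	}
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		panic(err)
	}
	// Build a set of what the runtime already has, one image ref per line.
	have := map[string]bool{}
	for _, img := range strings.Fields(string(out)) {
		have[img] = true
	}
	for _, img := range required {
		if !have[img] {
			fmt.Println("missing, need preload extraction:", img)
			return
		}
	}
	fmt.Println("images are preloaded, skipping loading")
}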
	I1014 08:46:05.095753   15224 kubeadm.go:934] updating node { 172.20.106.123 8443 v1.31.1 docker true true} ...
	I1014 08:46:05.096021   15224 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-671000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.20.106.123
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-671000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 08:46:05.105582   15224 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1014 08:46:05.173403   15224 command_runner.go:130] > cgroupfs
	I1014 08:46:05.173658   15224 cni.go:84] Creating CNI manager for ""
	I1014 08:46:05.173728   15224 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1014 08:46:05.173853   15224 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1014 08:46:05.173929   15224 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.20.106.123 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-671000 NodeName:multinode-671000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.20.106.123"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.20.106.123 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 08:46:05.174405   15224 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.20.106.123
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-671000"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "172.20.106.123"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.20.106.123"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
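
The YAML above is the fully rendered kubeadm configuration that minikube ships to the VM as /var/tmp/minikube/kubeadm.yaml.new. As a rough sketch of how such a stanza can be produced (this is not minikube's actual template code; the struct and field names below are invented for illustration), a Go text/template rendering of the InitConfiguration section:

package main

import (
	"os"
	"text/template"
)

// initCfg is a hypothetical stand-in for the values minikube fills in;
// the field names are illustrative, not minikube's real structs.
type initCfg struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	CRISocket        string
}

const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  taints: []
`

func main() {
	t := template.Must(template.New("init").Parse(initTmpl))
	cfg := initCfg{
		AdvertiseAddress: "172.20.106.123",
		BindPort:         8443,
		NodeName:         "multinode-671000",
		CRISocket:        "unix:///var/run/cri-dockerd.sock",
	}
	// Render to stdout; the real flow ships the rendered bytes into the VM over SSH.
	if err := t.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}
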
	
	I1014 08:46:05.187718   15224 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1014 08:46:05.208520   15224 command_runner.go:130] > kubeadm
	I1014 08:46:05.209540   15224 command_runner.go:130] > kubectl
	I1014 08:46:05.209540   15224 command_runner.go:130] > kubelet
	I1014 08:46:05.209540   15224 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 08:46:05.221947   15224 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 08:46:05.238933   15224 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1014 08:46:05.269892   15224 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 08:46:05.304444   15224 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2300 bytes)
	I1014 08:46:05.350507   15224 ssh_runner.go:195] Run: grep 172.20.106.123	control-plane.minikube.internal$ /etc/hosts
	I1014 08:46:05.357197   15224 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.20.106.123	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
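
The one-liner above is an idempotent hosts-file update: strip any stale line for control-plane.minikube.internal, append the current IP, and copy the result back. A Go sketch of the same filter-and-append pattern (run locally for illustration; the real step executes over SSH with sudo):

package main

import (
	"os"
	"strings"
)

// ensureHostsEntry rewrites hostsPath so exactly one line maps host to ip,
// mirroring the grep -v / echo / cp pipeline in the log above.
func ensureHostsEntry(hostsPath, ip, host string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		// Drop any existing mapping for this hostname (tab-separated, as in the log).
		if strings.HasSuffix(line, "\t"+host) {
			continue
		}
		kept = append(kept, line)
	}
	// Trim trailing empty elements so blank lines don't accumulate across runs.
	for len(kept) > 0 && kept[len(kept)-1] == "" {
		kept = kept[:len(kept)-1]
	}
	kept = append(kept, ip+"\t"+host, "")
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")), 0644)
}

func main() {
	// Illustrative invocation with the values from the log; needs root to write /etc/hosts.
	if err := ensureHostsEntry("/etc/hosts", "172.20.106.123", "control-plane.minikube.internal"); err != nil {
		panic(err)
	}
}
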
	I1014 08:46:05.395114   15224 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 08:46:05.594775   15224 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 08:46:05.622076   15224 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000 for IP: 172.20.106.123
	I1014 08:46:05.622269   15224 certs.go:194] generating shared ca certs ...
	I1014 08:46:05.622335   15224 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 08:46:05.623386   15224 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I1014 08:46:05.623972   15224 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I1014 08:46:05.623972   15224 certs.go:256] generating profile certs ...
	I1014 08:46:05.623972   15224 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000\client.key
	I1014 08:46:05.625153   15224 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000\apiserver.key.e9b19ac9
	I1014 08:46:05.625279   15224 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000\apiserver.crt.e9b19ac9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.20.106.123]
	I1014 08:46:05.684226   15224 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000\apiserver.crt.e9b19ac9 ...
	I1014 08:46:05.684226   15224 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000\apiserver.crt.e9b19ac9: {Name:mk3795177dce49c783f9ee27d09e16b869d515a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 08:46:05.686235   15224 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000\apiserver.key.e9b19ac9 ...
	I1014 08:46:05.686235   15224 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000\apiserver.key.e9b19ac9: {Name:mkf4893f04bf939f2cb6f963f84b6c5956474043 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 08:46:05.686920   15224 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000\apiserver.crt.e9b19ac9 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000\apiserver.crt
	I1014 08:46:05.704929   15224 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000\apiserver.key.e9b19ac9 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000\apiserver.key
	I1014 08:46:05.706523   15224 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000\proxy-client.key
	I1014 08:46:05.706644   15224 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I1014 08:46:05.706899   15224 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I1014 08:46:05.707047   15224 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1014 08:46:05.707047   15224 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1014 08:46:05.707047   15224 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1014 08:46:05.707588   15224 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1014 08:46:05.707828   15224 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1014 08:46:05.707989   15224 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1014 08:46:05.708208   15224 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\936.pem (1338 bytes)
	W1014 08:46:05.709086   15224 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\936_empty.pem, impossibly tiny 0 bytes
	I1014 08:46:05.709214   15224 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1014 08:46:05.709214   15224 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I1014 08:46:05.710051   15224 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1014 08:46:05.710359   15224 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I1014 08:46:05.710530   15224 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem (1708 bytes)
	I1014 08:46:05.711260   15224 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem -> /usr/share/ca-certificates/9362.pem
	I1014 08:46:05.711299   15224 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1014 08:46:05.711299   15224 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\936.pem -> /usr/share/ca-certificates/936.pem
	I1014 08:46:05.712863   15224 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 08:46:05.771770   15224 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1014 08:46:05.822670   15224 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 08:46:05.877047   15224 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 08:46:05.938655   15224 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1014 08:46:05.991183   15224 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1014 08:46:06.043812   15224 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 08:46:06.095230   15224 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1014 08:46:06.145209   15224 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem --> /usr/share/ca-certificates/9362.pem (1708 bytes)
	I1014 08:46:06.192720   15224 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 08:46:06.236591   15224 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\936.pem --> /usr/share/ca-certificates/936.pem (1338 bytes)
	I1014 08:46:06.281178   15224 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 08:46:06.323802   15224 ssh_runner.go:195] Run: openssl version
	I1014 08:46:06.332179   15224 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I1014 08:46:06.344790   15224 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9362.pem && ln -fs /usr/share/ca-certificates/9362.pem /etc/ssl/certs/9362.pem"
	I1014 08:46:06.379138   15224 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9362.pem
	I1014 08:46:06.386330   15224 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 14 14:00 /usr/share/ca-certificates/9362.pem
	I1014 08:46:06.386330   15224 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 14:00 /usr/share/ca-certificates/9362.pem
	I1014 08:46:06.397806   15224 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9362.pem
	I1014 08:46:06.406914   15224 command_runner.go:130] > 3ec20f2e
	I1014 08:46:06.421441   15224 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9362.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 08:46:06.452745   15224 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 08:46:06.487518   15224 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 08:46:06.495302   15224 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 14 13:44 /usr/share/ca-certificates/minikubeCA.pem
	I1014 08:46:06.495302   15224 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 13:44 /usr/share/ca-certificates/minikubeCA.pem
	I1014 08:46:06.505374   15224 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 08:46:06.514465   15224 command_runner.go:130] > b5213941
	I1014 08:46:06.526108   15224 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 08:46:06.554095   15224 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/936.pem && ln -fs /usr/share/ca-certificates/936.pem /etc/ssl/certs/936.pem"
	I1014 08:46:06.585235   15224 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/936.pem
	I1014 08:46:06.591189   15224 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 14 14:00 /usr/share/ca-certificates/936.pem
	I1014 08:46:06.591396   15224 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 14:00 /usr/share/ca-certificates/936.pem
	I1014 08:46:06.605730   15224 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/936.pem
	I1014 08:46:06.614146   15224 command_runner.go:130] > 51391683
	I1014 08:46:06.624214   15224 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/936.pem /etc/ssl/certs/51391683.0"
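
The three-step pattern repeated above (place the PEM under /usr/share/ca-certificates, compute its OpenSSL subject hash, then symlink /etc/ssl/certs/<hash>.0 to it) is how OpenSSL-style trust directories index CAs for lookup. A sketch of the hash-and-symlink step in Go, shelling out to openssl exactly as the log does; the path is taken from the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkByHash computes the OpenSSL subject hash of certPath and creates
// /etc/ssl/certs/<hash>.0 pointing at it, as in the log above.
func linkByHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	// test -L || ln -fs semantics: leave an existing symlink alone,
	// otherwise replace whatever is there with a fresh link.
	if fi, err := os.Lstat(link); err == nil && fi.Mode()&os.ModeSymlink != 0 {
		return nil
	}
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		panic(err)
	}
}
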
	I1014 08:46:06.653769   15224 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 08:46:06.662087   15224 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 08:46:06.662180   15224 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1014 08:46:06.662180   15224 command_runner.go:130] > Device: 8,1	Inode: 5241127     Links: 1
	I1014 08:46:06.662180   15224 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1014 08:46:06.662180   15224 command_runner.go:130] > Access: 2024-10-14 15:22:28.423226154 +0000
	I1014 08:46:06.662180   15224 command_runner.go:130] > Modify: 2024-10-14 15:22:28.423226154 +0000
	I1014 08:46:06.662180   15224 command_runner.go:130] > Change: 2024-10-14 15:22:28.423226154 +0000
	I1014 08:46:06.662180   15224 command_runner.go:130] >  Birth: 2024-10-14 15:22:28.423226154 +0000
	I1014 08:46:06.673532   15224 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1014 08:46:06.685770   15224 command_runner.go:130] > Certificate will not expire
	I1014 08:46:06.696628   15224 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1014 08:46:06.705667   15224 command_runner.go:130] > Certificate will not expire
	I1014 08:46:06.716957   15224 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1014 08:46:06.727193   15224 command_runner.go:130] > Certificate will not expire
	I1014 08:46:06.740400   15224 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1014 08:46:06.750259   15224 command_runner.go:130] > Certificate will not expire
	I1014 08:46:06.762908   15224 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1014 08:46:06.773153   15224 command_runner.go:130] > Certificate will not expire
	I1014 08:46:06.782909   15224 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1014 08:46:06.792933   15224 command_runner.go:130] > Certificate will not expire
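
Each `openssl x509 -checkend 86400` above exits zero only if the certificate is still valid 24 hours from now; that exit code is the entire freshness test. The same check can be done natively with crypto/x509, a sketch:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// equivalent to `openssl x509 -checkend <seconds>` in the log above.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	if soon {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}
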
	I1014 08:46:06.793304   15224 kubeadm.go:392] StartCluster: {Name:multinode-671000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-671000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.106.123 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.20.109.137 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.20.102.29 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 08:46:06.802728   15224 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1014 08:46:06.838744   15224 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 08:46:06.861625   15224 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1014 08:46:06.861625   15224 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1014 08:46:06.861625   15224 command_runner.go:130] > /var/lib/minikube/etcd:
	I1014 08:46:06.861625   15224 command_runner.go:130] > member
	I1014 08:46:06.861766   15224 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1014 08:46:06.861766   15224 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1014 08:46:06.874073   15224 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1014 08:46:06.895878   15224 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1014 08:46:06.897075   15224 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-671000" does not appear in C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1014 08:46:06.897375   15224 kubeconfig.go:62] C:\Users\jenkins.minikube1\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "multinode-671000" cluster setting kubeconfig missing "multinode-671000" context setting]
	I1014 08:46:06.898017   15224 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 08:46:06.913804   15224 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1014 08:46:06.915160   15224 kapi.go:59] client config for multinode-671000: &rest.Config{Host:"https://172.20.106.123:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-671000/client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-671000/client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2926ae0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1014 08:46:06.916608   15224 cert_rotation.go:140] Starting client certificate rotation controller
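
The repair step above rewrites the kubeconfig so that a `multinode-671000` cluster and context exist and point at the new control-plane endpoint. A hedged sketch of the same fix using client-go's clientcmd package (paths and names copied from the log; this is not minikube's own kubeconfig code):

package main

import (
	"k8s.io/client-go/tools/clientcmd"
	api "k8s.io/client-go/tools/clientcmd/api"
)

// repairKubeconfig inserts (or overwrites) a cluster, user, and context entry,
// roughly what the "needs updating (will repair)" step above performs.
func repairKubeconfig(path, name, server, caFile, clientCert, clientKey string) error {
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		return err
	}
	cfg.Clusters[name] = &api.Cluster{Server: server, CertificateAuthority: caFile}
	cfg.AuthInfos[name] = &api.AuthInfo{ClientCertificate: clientCert, ClientKey: clientKey}
	cfg.Contexts[name] = &api.Context{Cluster: name, AuthInfo: name, Namespace: "default"}
	cfg.CurrentContext = name
	return clientcmd.WriteToFile(*cfg, path)
}

func main() {
	if err := repairKubeconfig(
		`C:\Users\jenkins.minikube1\minikube-integration\kubeconfig`,
		"multinode-671000",
		"https://172.20.106.123:8443",
		`C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt`,
		`C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000\client.crt`,
		`C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000\client.key`,
	); err != nil {
		panic(err)
	}
}
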
	I1014 08:46:06.927856   15224 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1014 08:46:06.948546   15224 command_runner.go:130] > --- /var/tmp/minikube/kubeadm.yaml
	I1014 08:46:06.948604   15224 command_runner.go:130] > +++ /var/tmp/minikube/kubeadm.yaml.new
	I1014 08:46:06.948604   15224 command_runner.go:130] > @@ -1,7 +1,7 @@
	I1014 08:46:06.948604   15224 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta4
	I1014 08:46:06.948604   15224 command_runner.go:130] >  kind: InitConfiguration
	I1014 08:46:06.948604   15224 command_runner.go:130] >  localAPIEndpoint:
	I1014 08:46:06.948604   15224 command_runner.go:130] > -  advertiseAddress: 172.20.100.167
	I1014 08:46:06.948604   15224 command_runner.go:130] > +  advertiseAddress: 172.20.106.123
	I1014 08:46:06.948604   15224 command_runner.go:130] >    bindPort: 8443
	I1014 08:46:06.948604   15224 command_runner.go:130] >  bootstrapTokens:
	I1014 08:46:06.948604   15224 command_runner.go:130] >    - groups:
	I1014 08:46:06.948604   15224 command_runner.go:130] > @@ -15,13 +15,13 @@
	I1014 08:46:06.948604   15224 command_runner.go:130] >    name: "multinode-671000"
	I1014 08:46:06.948604   15224 command_runner.go:130] >    kubeletExtraArgs:
	I1014 08:46:06.948604   15224 command_runner.go:130] >      - name: "node-ip"
	I1014 08:46:06.948604   15224 command_runner.go:130] > -      value: "172.20.100.167"
	I1014 08:46:06.948604   15224 command_runner.go:130] > +      value: "172.20.106.123"
	I1014 08:46:06.948604   15224 command_runner.go:130] >    taints: []
	I1014 08:46:06.948604   15224 command_runner.go:130] >  ---
	I1014 08:46:06.948604   15224 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta4
	I1014 08:46:06.948604   15224 command_runner.go:130] >  kind: ClusterConfiguration
	I1014 08:46:06.948604   15224 command_runner.go:130] >  apiServer:
	I1014 08:46:06.948604   15224 command_runner.go:130] > -  certSANs: ["127.0.0.1", "localhost", "172.20.100.167"]
	I1014 08:46:06.948604   15224 command_runner.go:130] > +  certSANs: ["127.0.0.1", "localhost", "172.20.106.123"]
	I1014 08:46:06.948604   15224 command_runner.go:130] >    extraArgs:
	I1014 08:46:06.948604   15224 command_runner.go:130] >      - name: "enable-admission-plugins"
	I1014 08:46:06.948604   15224 command_runner.go:130] >        value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	I1014 08:46:06.948604   15224 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,7 +1,7 @@
	 apiVersion: kubeadm.k8s.io/v1beta4
	 kind: InitConfiguration
	 localAPIEndpoint:
	-  advertiseAddress: 172.20.100.167
	+  advertiseAddress: 172.20.106.123
	   bindPort: 8443
	 bootstrapTokens:
	   - groups:
	@@ -15,13 +15,13 @@
	   name: "multinode-671000"
	   kubeletExtraArgs:
	     - name: "node-ip"
	-      value: "172.20.100.167"
	+      value: "172.20.106.123"
	   taints: []
	 ---
	 apiVersion: kubeadm.k8s.io/v1beta4
	 kind: ClusterConfiguration
	 apiServer:
	-  certSANs: ["127.0.0.1", "localhost", "172.20.100.167"]
	+  certSANs: ["127.0.0.1", "localhost", "172.20.106.123"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	       value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	
	-- /stdout --
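
Drift detection here reduces to `diff -u old new`: exit status 0 means the rendered config is unchanged, 1 means it drifted and the cluster must be reconfigured from the new file, and anything else is a real error. A small Go sketch of that tri-state decision:

package main

import (
	"fmt"
	"os/exec"
)

// configDrift runs `diff -u old new` the way the log does and maps diff's
// exit codes onto a tri-state answer: identical, drifted, or failed.
func configDrift(oldPath, newPath string) (drifted bool, diff string, err error) {
	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return false, "", nil // exit 0: files identical
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		return true, string(out), nil // exit 1: files differ
	}
	return false, "", err // exit >1: diff itself failed
}

func main() {
	drifted, diff, err := configDrift("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	if drifted {
		fmt.Println("detected kubeadm config drift (will reconfigure cluster):\n" + diff)
	}
}
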
	I1014 08:46:06.948604   15224 kubeadm.go:1160] stopping kube-system containers ...
	I1014 08:46:06.957610   15224 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1014 08:46:06.986322   15224 command_runner.go:130] > d9831e9f8ce8
	I1014 08:46:06.986322   15224 command_runner.go:130] > 3d8b7bae48a5
	I1014 08:46:06.986322   15224 command_runner.go:130] > 2f8cc9a218fe
	I1014 08:46:06.986322   15224 command_runner.go:130] > 1863de70f231
	I1014 08:46:06.986322   15224 command_runner.go:130] > fcdf89a3ac8c
	I1014 08:46:06.986322   15224 command_runner.go:130] > ea19428d7036
	I1014 08:46:06.986322   15224 command_runner.go:130] > 7144d8ce208c
	I1014 08:46:06.986322   15224 command_runner.go:130] > 5e48ddcfdf90
	I1014 08:46:06.986322   15224 command_runner.go:130] > 661e75bbf6b4
	I1014 08:46:06.986322   15224 command_runner.go:130] > 712aad669c9f
	I1014 08:46:06.986322   15224 command_runner.go:130] > 1ba3cd8bbd59
	I1014 08:46:06.986322   15224 command_runner.go:130] > 0b5a6e440d7b
	I1014 08:46:06.986322   15224 command_runner.go:130] > bfdde08319e3
	I1014 08:46:06.986322   15224 command_runner.go:130] > 2c6be2bd1889
	I1014 08:46:06.986322   15224 command_runner.go:130] > 2dc78387553f
	I1014 08:46:06.986322   15224 command_runner.go:130] > d5733d27d2f1
	I1014 08:46:06.986322   15224 docker.go:483] Stopping containers: [d9831e9f8ce8 3d8b7bae48a5 2f8cc9a218fe 1863de70f231 fcdf89a3ac8c ea19428d7036 7144d8ce208c 5e48ddcfdf90 661e75bbf6b4 712aad669c9f 1ba3cd8bbd59 0b5a6e440d7b bfdde08319e3 2c6be2bd1889 2dc78387553f d5733d27d2f1]
	I1014 08:46:06.996408   15224 ssh_runner.go:195] Run: docker stop d9831e9f8ce8 3d8b7bae48a5 2f8cc9a218fe 1863de70f231 fcdf89a3ac8c ea19428d7036 7144d8ce208c 5e48ddcfdf90 661e75bbf6b4 712aad669c9f 1ba3cd8bbd59 0b5a6e440d7b bfdde08319e3 2c6be2bd1889 2dc78387553f d5733d27d2f1
	I1014 08:46:07.026833   15224 command_runner.go:130] > d9831e9f8ce8
	I1014 08:46:07.026833   15224 command_runner.go:130] > 3d8b7bae48a5
	I1014 08:46:07.026833   15224 command_runner.go:130] > 2f8cc9a218fe
	I1014 08:46:07.026833   15224 command_runner.go:130] > 1863de70f231
	I1014 08:46:07.026833   15224 command_runner.go:130] > fcdf89a3ac8c
	I1014 08:46:07.026933   15224 command_runner.go:130] > ea19428d7036
	I1014 08:46:07.026933   15224 command_runner.go:130] > 7144d8ce208c
	I1014 08:46:07.026933   15224 command_runner.go:130] > 5e48ddcfdf90
	I1014 08:46:07.026933   15224 command_runner.go:130] > 661e75bbf6b4
	I1014 08:46:07.026933   15224 command_runner.go:130] > 712aad669c9f
	I1014 08:46:07.026933   15224 command_runner.go:130] > 1ba3cd8bbd59
	I1014 08:46:07.027021   15224 command_runner.go:130] > 0b5a6e440d7b
	I1014 08:46:07.027021   15224 command_runner.go:130] > bfdde08319e3
	I1014 08:46:07.027021   15224 command_runner.go:130] > 2c6be2bd1889
	I1014 08:46:07.027021   15224 command_runner.go:130] > 2dc78387553f
	I1014 08:46:07.027021   15224 command_runner.go:130] > d5733d27d2f1
	I1014 08:46:07.037793   15224 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1014 08:46:07.079785   15224 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 08:46:07.098779   15224 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1014 08:46:07.099497   15224 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1014 08:46:07.099497   15224 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1014 08:46:07.099497   15224 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 08:46:07.099729   15224 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 08:46:07.099790   15224 kubeadm.go:157] found existing configuration files:
	
	I1014 08:46:07.109597   15224 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 08:46:07.130667   15224 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 08:46:07.130667   15224 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 08:46:07.141593   15224 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 08:46:07.175279   15224 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 08:46:07.193122   15224 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 08:46:07.193185   15224 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 08:46:07.203545   15224 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 08:46:07.232530   15224 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 08:46:07.251543   15224 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 08:46:07.252513   15224 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 08:46:07.268173   15224 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 08:46:07.297176   15224 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 08:46:07.315189   15224 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 08:46:07.315189   15224 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 08:46:07.325210   15224 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 08:46:07.355828   15224 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 08:46:07.375817   15224 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 08:46:07.643991   15224 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 08:46:07.644109   15224 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1014 08:46:07.644109   15224 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1014 08:46:07.644109   15224 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1014 08:46:07.644109   15224 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I1014 08:46:07.644109   15224 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I1014 08:46:07.644109   15224 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I1014 08:46:07.644109   15224 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I1014 08:46:07.644243   15224 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I1014 08:46:07.644243   15224 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1014 08:46:07.644243   15224 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1014 08:46:07.644243   15224 command_runner.go:130] > [certs] Using the existing "sa" key
	I1014 08:46:07.644353   15224 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 08:46:07.716463   15224 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 08:46:07.872494   15224 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 08:46:08.266961   15224 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 08:46:08.469570   15224 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 08:46:08.690796   15224 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 08:46:09.250375   15224 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 08:46:09.259445   15224 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.6150891s)
	I1014 08:46:09.259445   15224 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1014 08:46:09.608189   15224 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 08:46:09.608251   15224 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 08:46:09.608320   15224 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1014 08:46:09.608320   15224 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 08:46:09.698134   15224 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 08:46:09.698243   15224 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 08:46:09.698243   15224 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 08:46:09.698243   15224 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 08:46:09.698243   15224 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1014 08:46:09.805890   15224 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 08:46:09.805965   15224 api_server.go:52] waiting for apiserver process to appear ...
	I1014 08:46:09.817282   15224 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 08:46:10.319264   15224 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 08:46:10.817293   15224 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 08:46:11.317276   15224 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 08:46:11.816884   15224 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 08:46:11.845322   15224 command_runner.go:130] > 1906
	I1014 08:46:11.845458   15224 api_server.go:72] duration metric: took 2.0394893s to wait for apiserver process to appear ...
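
The five pgrep runs above, spaced roughly 500ms apart, are a plain poll-until-found loop around `pgrep -xnf`. A sketch of that wait in Go (the timeout value is an assumption; the log does not state one):

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls pgrep every 500ms until a process matching pattern
// appears or the timeout elapses, mirroring the loop in the log above.
func waitForProcess(pattern string, timeout time.Duration) (pid string, err error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("pgrep", "-xnf", pattern).Output()
		if err == nil {
			return string(out), nil // -n prints the newest matching PID
		}
		time.Sleep(500 * time.Millisecond)
	}
	return "", errors.New("timed out waiting for " + pattern)
}

func main() {
	pid, err := waitForProcess("kube-apiserver.*minikube.*", 2*time.Minute)
	if err != nil {
		panic(err)
	}
	fmt.Print("apiserver pid: ", pid)
}
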
	I1014 08:46:11.845458   15224 api_server.go:88] waiting for apiserver healthz status ...
	I1014 08:46:11.845527   15224 api_server.go:253] Checking apiserver healthz at https://172.20.106.123:8443/healthz ...
	I1014 08:46:15.106193   15224 api_server.go:279] https://172.20.106.123:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1014 08:46:15.106276   15224 api_server.go:103] status: https://172.20.106.123:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1014 08:46:15.106276   15224 api_server.go:253] Checking apiserver healthz at https://172.20.106.123:8443/healthz ...
	I1014 08:46:15.196155   15224 api_server.go:279] https://172.20.106.123:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1014 08:46:15.196224   15224 api_server.go:103] status: https://172.20.106.123:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1014 08:46:15.346360   15224 api_server.go:253] Checking apiserver healthz at https://172.20.106.123:8443/healthz ...
	I1014 08:46:15.353345   15224 api_server.go:279] https://172.20.106.123:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1014 08:46:15.353345   15224 api_server.go:103] status: https://172.20.106.123:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1014 08:46:15.845536   15224 api_server.go:253] Checking apiserver healthz at https://172.20.106.123:8443/healthz ...
	I1014 08:46:15.859623   15224 api_server.go:279] https://172.20.106.123:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1014 08:46:15.859997   15224 api_server.go:103] status: https://172.20.106.123:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1014 08:46:16.346035   15224 api_server.go:253] Checking apiserver healthz at https://172.20.106.123:8443/healthz ...
	I1014 08:46:16.357230   15224 api_server.go:279] https://172.20.106.123:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1014 08:46:16.357230   15224 api_server.go:103] status: https://172.20.106.123:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1014 08:46:16.846350   15224 api_server.go:253] Checking apiserver healthz at https://172.20.106.123:8443/healthz ...
	I1014 08:46:16.854581   15224 api_server.go:279] https://172.20.106.123:8443/healthz returned 200:
	ok
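
The polling above walks /healthz from 403 (anonymous requests are rejected until RBAC bootstrap completes) through 500 (poststarthooks still failing) to a bare 200 `ok`. A minimal sketch of that wait; InsecureSkipVerify is used only because this illustration skips loading minikube's CA, which a real client would trust:

package main

import (
	"crypto/tls"
	"errors"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint every 500ms until it
// returns 200, as in the log above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			// 403 before RBAC bootstrap, 500 while poststarthooks settle.
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return errors.New("apiserver never became healthy")
}

func main() {
	if err := waitForHealthz("https://172.20.106.123:8443/healthz", 4*time.Minute); err != nil {
		panic(err)
	}
}
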
	I1014 08:46:16.855051   15224 round_trippers.go:463] GET https://172.20.106.123:8443/version
	I1014 08:46:16.855051   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:16.855051   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:16.855051   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:16.866797   15224 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1014 08:46:16.866797   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:16.866797   15224 round_trippers.go:580]     Content-Length: 263
	I1014 08:46:16.866797   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:16 GMT
	I1014 08:46:16.866797   15224 round_trippers.go:580]     Audit-Id: db161d46-6ae8-4777-adaa-6abd4fa6219b
	I1014 08:46:16.866797   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:16.866797   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:16.866797   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:16.866797   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:16.866797   15224 request.go:1351] Response Body: {
	  "major": "1",
	  "minor": "31",
	  "gitVersion": "v1.31.1",
	  "gitCommit": "948afe5ca072329a73c8e79ed5938717a5cb3d21",
	  "gitTreeState": "clean",
	  "buildDate": "2024-09-11T21:22:08Z",
	  "goVersion": "go1.22.6",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1014 08:46:16.866951   15224 api_server.go:141] control plane version: v1.31.1
	I1014 08:46:16.866951   15224 api_server.go:131] duration metric: took 5.0214846s to wait for apiserver health ...
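
The GET /version body above is plain JSON, and only gitVersion matters for the control-plane version check. A sketch of decoding it (struct fields mirror the response shown; fields not listed are simply ignored by encoding/json):

package main

import (
	"encoding/json"
	"fmt"
)

// versionInfo mirrors the fields of the /version response in the log.
type versionInfo struct {
	Major      string `json:"major"`
	Minor      string `json:"minor"`
	GitVersion string `json:"gitVersion"`
	Platform   string `json:"platform"`
}

func main() {
	body := []byte(`{"major":"1","minor":"31","gitVersion":"v1.31.1","platform":"linux/amd64"}`)
	var v versionInfo
	if err := json.Unmarshal(body, &v); err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", v.GitVersion) // v1.31.1
}
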
	I1014 08:46:16.866951   15224 cni.go:84] Creating CNI manager for ""
	I1014 08:46:16.866951   15224 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1014 08:46:16.869379   15224 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1014 08:46:16.884269   15224 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1014 08:46:16.893465   15224 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1014 08:46:16.893506   15224 command_runner.go:130] >   Size: 2785880   	Blocks: 5448       IO Block: 4096   regular file
	I1014 08:46:16.893536   15224 command_runner.go:130] > Device: 0,17	Inode: 3500        Links: 1
	I1014 08:46:16.893536   15224 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1014 08:46:16.893536   15224 command_runner.go:130] > Access: 2024-10-14 15:44:46.012884200 +0000
	I1014 08:46:16.893536   15224 command_runner.go:130] > Modify: 2024-10-08 16:10:48.000000000 +0000
	I1014 08:46:16.893536   15224 command_runner.go:130] > Change: 2024-10-14 08:44:37.118000000 +0000
	I1014 08:46:16.893536   15224 command_runner.go:130] >  Birth: -
	I1014 08:46:16.893536   15224 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1014 08:46:16.893536   15224 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1014 08:46:16.968682   15224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1014 08:46:18.237229   15224 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1014 08:46:18.237298   15224 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1014 08:46:18.237298   15224 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1014 08:46:18.237335   15224 command_runner.go:130] > daemonset.apps/kindnet configured
	I1014 08:46:18.237335   15224 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.2686513s)
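
Applying the CNI manifest is a plain `kubectl apply` with the in-VM kubeconfig; the `unchanged`/`configured` lines show apply's idempotence on re-runs. A sketch of shelling out the same way, with the binary and paths taken from the log:

package main

import (
	"fmt"
	"os/exec"
)

// applyManifest runs kubectl apply against a manifest file, as the log does
// with the kindnet CNI YAML.
func applyManifest(kubectl, kubeconfig, manifest string) (string, error) {
	out, err := exec.Command(kubectl, "apply", "--kubeconfig="+kubeconfig, "-f", manifest).CombinedOutput()
	return string(out), err
}

func main() {
	out, err := applyManifest(
		"/var/lib/minikube/binaries/v1.31.1/kubectl",
		"/var/lib/minikube/kubeconfig",
		"/var/tmp/minikube/cni.yaml",
	)
	if err != nil {
		panic(err)
	}
	fmt.Print(out) // e.g. "daemonset.apps/kindnet configured"
}
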
	I1014 08:46:18.237467   15224 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 08:46:18.237500   15224 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1014 08:46:18.237500   15224 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1014 08:46:18.237500   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods
	I1014 08:46:18.237500   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:18.237500   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:18.237500   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:18.249884   15224 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1014 08:46:18.249884   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:18.249884   15224 round_trippers.go:580]     Audit-Id: 52d22011-ca0d-4991-a7fe-70d33b5c75f4
	I1014 08:46:18.249884   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:18.249884   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:18.249884   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:18.249884   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:18.249884   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:18 GMT
	I1014 08:46:18.251203   15224 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1858"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 91046 chars]
	I1014 08:46:18.258352   15224 system_pods.go:59] 12 kube-system pods found
	I1014 08:46:18.258352   15224 system_pods.go:61] "coredns-7c65d6cfc9-fs9ct" [fd736862-9e3e-4a3d-9a86-08efd2338477] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 08:46:18.258352   15224 system_pods.go:61] "etcd-multinode-671000" [098aece2-cb2c-470a-878a-872417e4387f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1014 08:46:18.258454   15224 system_pods.go:61] "kindnet-5rqxq" [480b1f88-eb32-4638-9834-2be17b8d35ed] Running
	I1014 08:46:18.258454   15224 system_pods.go:61] "kindnet-rgbjf" [445ff184-85e8-4153-a3d0-a0185c4f95de] Running
	I1014 08:46:18.258454   15224 system_pods.go:61] "kindnet-wqrx6" [a508bbf9-7565-4c73-98cb-e9684985c298] Running
	I1014 08:46:18.258454   15224 system_pods.go:61] "kube-apiserver-multinode-671000" [64595feb-e6e8-4e69-a4b7-6459d15e3beb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1014 08:46:18.258527   15224 system_pods.go:61] "kube-controller-manager-multinode-671000" [a5c7bb80-c844-476f-ba47-1cd4e599b92d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1014 08:46:18.258527   15224 system_pods.go:61] "kube-proxy-kbpjf" [004b7f38-fa3b-4c2c-9524-8d5b1ba514e9] Running
	I1014 08:46:18.258527   15224 system_pods.go:61] "kube-proxy-n6txs" [796a44f9-2067-438d-9359-34d5f968c861] Running
	I1014 08:46:18.258527   15224 system_pods.go:61] "kube-proxy-r74dx" [f8d14473-8859-4015-84e9-d00656cc00c9] Running
	I1014 08:46:18.258527   15224 system_pods.go:61] "kube-scheduler-multinode-671000" [97febcab-f54d-4338-ba7c-2dc5e69b77fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1014 08:46:18.258527   15224 system_pods.go:61] "storage-provisioner" [fde8ff75-bc7f-4db4-b098-c3a08b38d205] Running
	I1014 08:46:18.258527   15224 system_pods.go:74] duration metric: took 21.0609ms to wait for pod list to return data ...
	I1014 08:46:18.258527   15224 node_conditions.go:102] verifying NodePressure condition ...
	I1014 08:46:18.258527   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes
	I1014 08:46:18.258527   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:18.258527   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:18.258527   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:18.346618   15224 round_trippers.go:574] Response Status: 200 OK in 88 milliseconds
	I1014 08:46:18.346716   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:18.346766   15224 round_trippers.go:580]     Audit-Id: b2743d1a-1144-484b-bf9a-6b50e65fcd86
	I1014 08:46:18.346766   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:18.346766   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:18.346766   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:18.346766   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:18.346766   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:18 GMT
	I1014 08:46:18.347019   15224 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1866"},"items":[{"metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1816","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 16290 chars]
	I1014 08:46:18.349119   15224 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 08:46:18.349204   15224 node_conditions.go:123] node cpu capacity is 2
	I1014 08:46:18.349243   15224 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 08:46:18.349243   15224 node_conditions.go:123] node cpu capacity is 2
	I1014 08:46:18.349243   15224 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 08:46:18.349243   15224 node_conditions.go:123] node cpu capacity is 2
	I1014 08:46:18.349287   15224 node_conditions.go:105] duration metric: took 90.7154ms to run NodePressure ...
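The node_conditions lines above read each node's ephemeral-storage and CPU capacity while verifying no pressure condition is set. A sketch of that check, assuming the clientset and imports from the previous sketch plus corev1 "k8s.io/api/core/v1"; minikube's exact logic may differ:

    // verifyNodePressure prints per-node capacity (as in the node_conditions.go
    // lines above) and fails if any node reports memory or disk pressure.
    func verifyNodePressure(ctx context.Context, cs *kubernetes.Clientset) error {
        nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            fmt.Printf("node storage ephemeral capacity is %s\n", n.Status.Capacity.StorageEphemeral())
            fmt.Printf("node cpu capacity is %s\n", n.Status.Capacity.Cpu())
            for _, c := range n.Status.Conditions {
                pressure := c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure
                if pressure && c.Status == corev1.ConditionTrue {
                    return fmt.Errorf("node %s is under %s", n.Name, c.Type)
                }
            }
        }
        return nil
    }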
	I1014 08:46:18.349328   15224 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 08:46:19.045852   15224 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1014 08:46:19.045882   15224 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1014 08:46:19.045954   15224 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1014 08:46:19.046204   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I1014 08:46:19.046228   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:19.046228   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:19.046266   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:19.056097   15224 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1014 08:46:19.056170   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:19.056170   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:19.056170   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:19.056170   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:19 GMT
	I1014 08:46:19.056170   15224 round_trippers.go:580]     Audit-Id: 0afa301e-6abd-47f5-b7b7-da29b01e34e8
	I1014 08:46:19.056170   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:19.056170   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:19.057046   15224 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1915"},"items":[{"metadata":{"name":"etcd-multinode-671000","namespace":"kube-system","uid":"098aece2-cb2c-470a-878a-872417e4387f","resourceVersion":"1852","creationTimestamp":"2024-10-14T15:46:17Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.20.106.123:2379","kubernetes.io/config.hash":"679486a8f27de5805bc2e87fb1920dce","kubernetes.io/config.mirror":"679486a8f27de5805bc2e87fb1920dce","kubernetes.io/config.seen":"2024-10-14T15:46:09.843414705Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:46:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotati
ons":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f [truncated 31353 chars]
	I1014 08:46:19.058480   15224 kubeadm.go:739] kubelet initialised
	I1014 08:46:19.058480   15224 kubeadm.go:740] duration metric: took 12.5255ms waiting for restarted kubelet to initialise ...
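The ?labelSelector=tier%3Dcontrol-plane query used for the kubelet-initialise check maps directly onto ListOptions.LabelSelector in client-go. A short sketch, reusing the clientset and imports from the earlier example:

    // controlPlanePods counts kube-system pods carrying the control-plane tier
    // label, i.e. the same filtered list the restarted-kubelet wait polls above.
    func controlPlanePods(ctx context.Context, cs *kubernetes.Clientset) (int, error) {
        pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{
            LabelSelector: "tier=control-plane", // the ?labelSelector=tier%3Dcontrol-plane query
        })
        if err != nil {
            return 0, err
        }
        return len(pods.Items), nil
    }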
	I1014 08:46:19.058480   15224 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 08:46:19.058480   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods
	I1014 08:46:19.058480   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:19.058480   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:19.058480   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:19.065139   15224 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1014 08:46:19.065324   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:19.065479   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:19.065497   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:19.065497   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:19.065532   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:19.065532   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:19 GMT
	I1014 08:46:19.065532   15224 round_trippers.go:580]     Audit-Id: ef054b89-30ee-4760-a876-0f8d7ea29aef
	I1014 08:46:19.066752   15224 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1915"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 91046 chars]
	I1014 08:46:19.070969   15224 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-fs9ct" in "kube-system" namespace to be "Ready" ...
	I1014 08:46:19.071629   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:46:19.071629   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:19.071629   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:19.071721   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:19.074421   15224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 08:46:19.074421   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:19.074421   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:19.074421   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:19.074421   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:19 GMT
	I1014 08:46:19.074421   15224 round_trippers.go:580]     Audit-Id: 3832a640-eb73-40a8-a3ee-e7e00c00cd72
	I1014 08:46:19.074421   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:19.074421   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:19.074421   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I1014 08:46:19.076009   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:19.076104   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:19.076104   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:19.076104   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:19.078333   15224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 08:46:19.078727   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:19.078727   15224 round_trippers.go:580]     Audit-Id: 9544093b-6526-4687-bb61-322267e43d93
	I1014 08:46:19.078727   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:19.078727   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:19.078727   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:19.078727   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:19.078727   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:19 GMT
	I1014 08:46:19.079153   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:19.079770   15224 pod_ready.go:98] node "multinode-671000" hosting pod "coredns-7c65d6cfc9-fs9ct" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-671000" has status "Ready":"False"
	I1014 08:46:19.079872   15224 pod_ready.go:82] duration metric: took 8.8343ms for pod "coredns-7c65d6cfc9-fs9ct" in "kube-system" namespace to be "Ready" ...
	E1014 08:46:19.079872   15224 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-671000" hosting pod "coredns-7c65d6cfc9-fs9ct" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-671000" has status "Ready":"False"
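Each pod_ready block in this log follows the same pattern: fetch the pod, fetch the node it is scheduled on, and skip the wait with the "(skipping!)" message when the node itself is not Ready. A sketch of that pattern as a hypothetical helper, same imports as the sketches above:

    // podReady reports whether a pod's PodReady condition is True, but refuses
    // to wait on pods whose hosting node is itself NotReady/Unknown - the
    // "(skipping!)" branch seen repeatedly in the log.
    func podReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
        pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        node, err := cs.CoreV1().Nodes().Get(ctx, pod.Spec.NodeName, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady && c.Status != corev1.ConditionTrue {
                return false, fmt.Errorf("node %q hosting pod %q is not ready: %s",
                    node.Name, name, c.Status)
            }
        }
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }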
	I1014 08:46:19.079906   15224 pod_ready.go:79] waiting up to 4m0s for pod "etcd-multinode-671000" in "kube-system" namespace to be "Ready" ...
	I1014 08:46:19.080037   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-671000
	I1014 08:46:19.080086   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:19.080086   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:19.080131   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:19.083388   15224 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:46:19.083388   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:19.083388   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:19 GMT
	I1014 08:46:19.083388   15224 round_trippers.go:580]     Audit-Id: fe412dc9-9d4e-48e0-9c15-f94fe77520dd
	I1014 08:46:19.083388   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:19.083388   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:19.083388   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:19.083388   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:19.083388   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-671000","namespace":"kube-system","uid":"098aece2-cb2c-470a-878a-872417e4387f","resourceVersion":"1852","creationTimestamp":"2024-10-14T15:46:17Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.20.106.123:2379","kubernetes.io/config.hash":"679486a8f27de5805bc2e87fb1920dce","kubernetes.io/config.mirror":"679486a8f27de5805bc2e87fb1920dce","kubernetes.io/config.seen":"2024-10-14T15:46:09.843414705Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:46:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6841 chars]
	I1014 08:46:19.084815   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:19.084892   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:19.084892   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:19.084892   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:19.087129   15224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 08:46:19.087129   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:19.087129   15224 round_trippers.go:580]     Audit-Id: 7bbf2564-bdde-4fe5-8eab-179b929f9aec
	I1014 08:46:19.087129   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:19.087129   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:19.087129   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:19.087129   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:19.088140   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:19 GMT
	I1014 08:46:19.088446   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:19.089051   15224 pod_ready.go:98] node "multinode-671000" hosting pod "etcd-multinode-671000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-671000" has status "Ready":"False"
	I1014 08:46:19.089129   15224 pod_ready.go:82] duration metric: took 9.2232ms for pod "etcd-multinode-671000" in "kube-system" namespace to be "Ready" ...
	E1014 08:46:19.089129   15224 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-671000" hosting pod "etcd-multinode-671000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-671000" has status "Ready":"False"
	I1014 08:46:19.089129   15224 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-multinode-671000" in "kube-system" namespace to be "Ready" ...
	I1014 08:46:19.089215   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-671000
	I1014 08:46:19.089272   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:19.089314   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:19.089314   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:19.092214   15224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 08:46:19.092538   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:19.092538   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:19 GMT
	I1014 08:46:19.092538   15224 round_trippers.go:580]     Audit-Id: 96221490-98ce-4d13-b34c-1c50eb001ae3
	I1014 08:46:19.092538   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:19.092538   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:19.092538   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:19.092538   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:19.092768   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-671000","namespace":"kube-system","uid":"64595feb-e6e8-4e69-a4b7-6459d15e3beb","resourceVersion":"1823","creationTimestamp":"2024-10-14T15:46:15Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.20.106.123:8443","kubernetes.io/config.hash":"864b69f35cb25e9dd5d87a753a055a10","kubernetes.io/config.mirror":"864b69f35cb25e9dd5d87a753a055a10","kubernetes.io/config.seen":"2024-10-14T15:46:09.765946769Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:46:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 8293 chars]
	I1014 08:46:19.093732   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:19.093788   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:19.093788   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:19.093848   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:19.103021   15224 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1014 08:46:19.103021   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:19.103021   15224 round_trippers.go:580]     Audit-Id: 307ab35a-40bf-407e-8717-b78948461267
	I1014 08:46:19.103021   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:19.103021   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:19.103021   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:19.103021   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:19.103021   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:19 GMT
	I1014 08:46:19.103021   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:19.103911   15224 pod_ready.go:98] node "multinode-671000" hosting pod "kube-apiserver-multinode-671000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-671000" has status "Ready":"False"
	I1014 08:46:19.104117   15224 pod_ready.go:82] duration metric: took 14.9876ms for pod "kube-apiserver-multinode-671000" in "kube-system" namespace to be "Ready" ...
	E1014 08:46:19.104117   15224 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-671000" hosting pod "kube-apiserver-multinode-671000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-671000" has status "Ready":"False"
	I1014 08:46:19.104213   15224 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-multinode-671000" in "kube-system" namespace to be "Ready" ...
	I1014 08:46:19.104304   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-671000
	I1014 08:46:19.104304   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:19.104304   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:19.104304   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:19.106845   15224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 08:46:19.106845   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:19.106845   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:19.107248   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:19.107248   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:19.107248   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:19 GMT
	I1014 08:46:19.107248   15224 round_trippers.go:580]     Audit-Id: eb45ac40-5b1e-4f52-b530-f756b4823b45
	I1014 08:46:19.107248   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:19.107351   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-671000","namespace":"kube-system","uid":"a5c7bb80-c844-476f-ba47-1cd4e599b92d","resourceVersion":"1821","creationTimestamp":"2024-10-14T15:22:39Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b83e31864e0ae98d29d960866012ecb0","kubernetes.io/config.mirror":"b83e31864e0ae98d29d960866012ecb0","kubernetes.io/config.seen":"2024-10-14T15:22:39.775213119Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7737 chars]
	I1014 08:46:19.107929   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:19.107929   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:19.107929   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:19.108102   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:19.109819   15224 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1014 08:46:19.109819   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:19.109819   15224 round_trippers.go:580]     Audit-Id: 0fa139c8-1acb-45ed-a3ec-61712883e1c2
	I1014 08:46:19.109819   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:19.109819   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:19.109819   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:19.109819   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:19.109819   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:19 GMT
	I1014 08:46:19.110444   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:19.111112   15224 pod_ready.go:98] node "multinode-671000" hosting pod "kube-controller-manager-multinode-671000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-671000" has status "Ready":"False"
	I1014 08:46:19.111112   15224 pod_ready.go:82] duration metric: took 6.8981ms for pod "kube-controller-manager-multinode-671000" in "kube-system" namespace to be "Ready" ...
	E1014 08:46:19.111112   15224 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-671000" hosting pod "kube-controller-manager-multinode-671000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-671000" has status "Ready":"False"
	I1014 08:46:19.111112   15224 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-kbpjf" in "kube-system" namespace to be "Ready" ...
	I1014 08:46:19.246940   15224 request.go:632] Waited for 135.8283ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kbpjf
	I1014 08:46:19.246940   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kbpjf
	I1014 08:46:19.246940   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:19.246940   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:19.246940   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:19.252552   15224 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:46:19.252552   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:19.252552   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:19.252552   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:19.252552   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:19.252552   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:19 GMT
	I1014 08:46:19.252552   15224 round_trippers.go:580]     Audit-Id: f3866a4a-4654-44ea-9a3c-a727cefd5824
	I1014 08:46:19.252552   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:19.252552   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-kbpjf","generateName":"kube-proxy-","namespace":"kube-system","uid":"004b7f38-fa3b-4c2c-9524-8d5b1ba514e9","resourceVersion":"1803","creationTimestamp":"2024-10-14T15:25:49Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e5217416-1ed2-4ea5-ae9b-ee7bcdba0d2a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:25:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e5217416-1ed2-4ea5-ae9b-ee7bcdba0d2a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6433 chars]
	I1014 08:46:19.446206   15224 request.go:632] Waited for 192.3619ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.106.123:8443/api/v1/nodes/multinode-671000-m02
	I1014 08:46:19.446206   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000-m02
	I1014 08:46:19.446206   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:19.446206   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:19.446206   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:19.450219   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:19.450219   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:19.450219   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:19.450219   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:19.450219   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:19 GMT
	I1014 08:46:19.450312   15224 round_trippers.go:580]     Audit-Id: 187a228c-30d5-43ec-a369-8f77969b7532
	I1014 08:46:19.450312   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:19.450312   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:19.450437   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000-m02","uid":"6883c70a-cd16-4441-905e-912bed6d2b2c","resourceVersion":"1802","creationTimestamp":"2024-10-14T15:25:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_10_14T08_25_50_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:25:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4487 chars]
	I1014 08:46:19.451051   15224 pod_ready.go:98] node "multinode-671000-m02" hosting pod "kube-proxy-kbpjf" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-671000-m02" has status "Ready":"Unknown"
	I1014 08:46:19.451256   15224 pod_ready.go:82] duration metric: took 339.9386ms for pod "kube-proxy-kbpjf" in "kube-system" namespace to be "Ready" ...
	E1014 08:46:19.451278   15224 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-671000-m02" hosting pod "kube-proxy-kbpjf" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-671000-m02" has status "Ready":"Unknown"
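The request.go:632 "client-side throttling" waits above come from client-go's own token-bucket rate limiter, not from server-side API priority and fairness (the message says as much). Raising QPS and Burst on rest.Config removes those pauses; the values below are arbitrary examples, not a minikube setting:

    // newFastClient builds a clientset whose rate limiter allows more requests
    // per second than client-go's defaults (QPS 5, Burst 10), avoiding the
    // ~190ms client-side throttling waits logged above.
    func newFastClient() (*kubernetes.Clientset, error) {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            return nil, err
        }
        cfg.QPS = 50    // default is 5 requests/second
        cfg.Burst = 100 // default burst is 10
        return kubernetes.NewForConfig(cfg)
    }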
	I1014 08:46:19.451278   15224 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-n6txs" in "kube-system" namespace to be "Ready" ...
	I1014 08:46:19.646165   15224 request.go:632] Waited for 194.7864ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/kube-proxy-n6txs
	I1014 08:46:19.646165   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/kube-proxy-n6txs
	I1014 08:46:19.646165   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:19.646165   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:19.646165   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:19.649574   15224 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:46:19.649574   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:19.649574   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:19.649574   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:19.650596   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:19.650596   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:19.650623   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:19 GMT
	I1014 08:46:19.650623   15224 round_trippers.go:580]     Audit-Id: 748ea5df-9734-42af-840e-3ee07707fa9b
	I1014 08:46:19.651257   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-n6txs","generateName":"kube-proxy-","namespace":"kube-system","uid":"796a44f9-2067-438d-9359-34d5f968c861","resourceVersion":"1784","creationTimestamp":"2024-10-14T15:30:35Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e5217416-1ed2-4ea5-ae9b-ee7bcdba0d2a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:30:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e5217416-1ed2-4ea5-ae9b-ee7bcdba0d2a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6428 chars]
	I1014 08:46:19.846137   15224 request.go:632] Waited for 194.6268ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.106.123:8443/api/v1/nodes/multinode-671000-m03
	I1014 08:46:19.846137   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000-m03
	I1014 08:46:19.846137   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:19.846137   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:19.846137   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:19.851717   15224 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:46:19.851717   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:19.851717   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:19 GMT
	I1014 08:46:19.851717   15224 round_trippers.go:580]     Audit-Id: d6dc5256-6236-4401-91fd-3938710e1e67
	I1014 08:46:19.851717   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:19.851717   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:19.851717   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:19.851717   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:19.851717   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000-m03","uid":"a7ea02fb-ac24-4430-adbc-9815c644cfa0","resourceVersion":"1897","creationTimestamp":"2024-10-14T15:41:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_10_14T08_41_35_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:41:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4303 chars]
	I1014 08:46:19.852403   15224 pod_ready.go:98] node "multinode-671000-m03" hosting pod "kube-proxy-n6txs" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-671000-m03" has status "Ready":"Unknown"
	I1014 08:46:19.852403   15224 pod_ready.go:82] duration metric: took 401.1243ms for pod "kube-proxy-n6txs" in "kube-system" namespace to be "Ready" ...
	E1014 08:46:19.852403   15224 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-671000-m03" hosting pod "kube-proxy-n6txs" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-671000-m03" has status "Ready":"Unknown"
	I1014 08:46:19.852403   15224 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-r74dx" in "kube-system" namespace to be "Ready" ...
	I1014 08:46:20.047159   15224 request.go:632] Waited for 194.7552ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r74dx
	I1014 08:46:20.047159   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r74dx
	I1014 08:46:20.047159   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:20.047159   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:20.047159   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:20.051858   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:20.051858   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:20.051858   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:20 GMT
	I1014 08:46:20.051858   15224 round_trippers.go:580]     Audit-Id: 716331ce-6faf-4057-94da-86ade670c50e
	I1014 08:46:20.051858   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:20.051858   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:20.051858   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:20.051858   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:20.051858   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-r74dx","generateName":"kube-proxy-","namespace":"kube-system","uid":"f8d14473-8859-4015-84e9-d00656cc00c9","resourceVersion":"1856","creationTimestamp":"2024-10-14T15:22:44Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e5217416-1ed2-4ea5-ae9b-ee7bcdba0d2a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e5217416-1ed2-4ea5-ae9b-ee7bcdba0d2a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6405 chars]
	I1014 08:46:20.247011   15224 request.go:632] Waited for 193.9468ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:20.247011   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:20.247011   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:20.247011   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:20.247011   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:20.252392   15224 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:46:20.252392   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:20.252392   15224 round_trippers.go:580]     Audit-Id: e82a2ff6-4c6b-41ff-bfd6-29d0fcd979b0
	I1014 08:46:20.252498   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:20.252498   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:20.252498   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:20.252498   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:20.252498   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:20 GMT
	I1014 08:46:20.252842   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:20.253523   15224 pod_ready.go:98] node "multinode-671000" hosting pod "kube-proxy-r74dx" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-671000" has status "Ready":"False"
	I1014 08:46:20.253523   15224 pod_ready.go:82] duration metric: took 401.1194ms for pod "kube-proxy-r74dx" in "kube-system" namespace to be "Ready" ...
	E1014 08:46:20.253523   15224 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-671000" hosting pod "kube-proxy-r74dx" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-671000" has status "Ready":"False"
	I1014 08:46:20.253523   15224 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-multinode-671000" in "kube-system" namespace to be "Ready" ...
	I1014 08:46:20.446344   15224 request.go:632] Waited for 192.8202ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-671000
	I1014 08:46:20.446344   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-671000
	I1014 08:46:20.446344   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:20.446344   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:20.446344   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:20.452363   15224 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1014 08:46:20.452532   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:20.452532   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:20.452532   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:20.452532   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:20.452532   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:20.452532   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:20 GMT
	I1014 08:46:20.452532   15224 round_trippers.go:580]     Audit-Id: a9a397a9-37dd-472d-87e8-017d88052826
	I1014 08:46:20.452912   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-671000","namespace":"kube-system","uid":"97febcab-f54d-4338-ba7c-2dc5e69b77fc","resourceVersion":"1819","creationTimestamp":"2024-10-14T15:22:38Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"3e987cfaedc75c39145e8fc131c60c81","kubernetes.io/config.mirror":"3e987cfaedc75c39145e8fc131c60c81","kubernetes.io/config.seen":"2024-10-14T15:22:32.104995089Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5449 chars]
	I1014 08:46:20.646876   15224 request.go:632] Waited for 193.3118ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:20.647345   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:20.647345   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:20.647345   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:20.647345   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:20.651509   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:20.651509   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:20.651509   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:20 GMT
	I1014 08:46:20.651509   15224 round_trippers.go:580]     Audit-Id: 88263073-d4b3-499e-a6e0-046a8c95d6d3
	I1014 08:46:20.651509   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:20.651509   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:20.651509   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:20.651509   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:20.651509   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:20.652591   15224 pod_ready.go:98] node "multinode-671000" hosting pod "kube-scheduler-multinode-671000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-671000" has status "Ready":"False"
	I1014 08:46:20.652591   15224 pod_ready.go:82] duration metric: took 399.0672ms for pod "kube-scheduler-multinode-671000" in "kube-system" namespace to be "Ready" ...
	E1014 08:46:20.652591   15224 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-671000" hosting pod "kube-scheduler-multinode-671000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-671000" has status "Ready":"False"
	I1014 08:46:20.652591   15224 pod_ready.go:39] duration metric: took 1.5941087s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 08:46:20.652699   15224 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1014 08:46:20.672053   15224 command_runner.go:130] > -16
	I1014 08:46:20.672053   15224 ops.go:34] apiserver oom_adj: -16
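
The two command_runner/ops lines above are minikube confirming the apiserver's OOM protection. An oom_adj of -16 is the legacy-scale reading of oom_score_adj -997, the value the kubelet assigns to Guaranteed-QoS control-plane pods, meaning the process is all but exempt from the OOM killer. A hypothetical sketch of the same check done natively rather than via "cat /proc/$(pgrep ...)/oom_adj" over SSH; the function name is ours:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // readOOMAdj returns the legacy oom_adj value for a pid. The kernel
    // derives it from oom_score_adj as score*17/1000, so kubelet's -997
    // reads back as -16.
    func readOOMAdj(pid int) (string, error) {
        b, err := os.ReadFile(fmt.Sprintf("/proc/%d/oom_adj", pid))
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(b)), nil
    }

    func main() {
        v, err := readOOMAdj(os.Getpid())
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("oom_adj:", v)
    }
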
	I1014 08:46:20.672053   15224 kubeadm.go:597] duration metric: took 13.8102629s to restartPrimaryControlPlane
	I1014 08:46:20.672053   15224 kubeadm.go:394] duration metric: took 13.8787245s to StartCluster
	I1014 08:46:20.672053   15224 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 08:46:20.672654   15224 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1014 08:46:20.674368   15224 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
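
The settings.go/lock.go lines above show minikube serializing kubeconfig updates behind a named lock with a 500ms retry delay and a 1m timeout before rewriting the file. A hedged sketch of that acquire-with-retry pattern using the third-party github.com/gofrs/flock package (an assumption for illustration; minikube's lock.go is its own implementation):

    package main

    import (
        "context"
        "fmt"
        "os"
        "time"

        "github.com/gofrs/flock"
    )

    // writeFileLocked mirrors the Delay:500ms Timeout:1m0s parameters in the
    // log: retry the lock every 500ms, give up after a minute, then write.
    func writeFileLocked(path string, data []byte) error {
        lock := flock.New(path + ".lock")
        ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
        defer cancel()
        ok, err := lock.TryLockContext(ctx, 500*time.Millisecond)
        if err != nil || !ok {
            return fmt.Errorf("acquiring lock for %s: %v", path, err)
        }
        defer lock.Unlock()
        return os.WriteFile(path, data, 0o600)
    }
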
	I1014 08:46:20.676008   15224 start.go:235] Will wait 6m0s for node &{Name: IP:172.20.106.123 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1014 08:46:20.676008   15224 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1014 08:46:20.676008   15224 config.go:182] Loaded profile config "multinode-671000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 08:46:20.680839   15224 out.go:177] * Verifying Kubernetes components...
	I1014 08:46:20.684611   15224 out.go:177] * Enabled addons: 
	I1014 08:46:20.689229   15224 addons.go:510] duration metric: took 13.2209ms for enable addons: enabled=[]
	I1014 08:46:20.696921   15224 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 08:46:20.962750   15224 ssh_runner.go:195] Run: sudo systemctl start kubelet
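
The two ssh_runner lines above restart the kubelet inside the VM over SSH. A stripped-down sketch of such a runner using golang.org/x/crypto/ssh; the address, user, and key path are placeholders, not values taken from this run:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // runRemote opens one SSH session and returns the command's combined output.
    func runRemote(addr string, cfg *ssh.ClientConfig, cmd string) (string, error) {
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            return "", err
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            return "", err
        }
        defer sess.Close()
        out, err := sess.CombinedOutput(cmd)
        return string(out), err
    }

    func main() {
        key, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/minikube/id_rsa")) // placeholder path
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker", // minikube VMs default to this user
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // fine for a local test VM only
        }
        out, err := runRemote("172.20.106.123:22", cfg,
            "sudo systemctl daemon-reload && sudo systemctl start kubelet")
        fmt.Print(out)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
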
	I1014 08:46:20.989170   15224 node_ready.go:35] waiting up to 6m0s for node "multinode-671000" to be "Ready" ...
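
Everything from here to the end of this excerpt is one readiness loop: GET the Node object roughly every 500ms, inspect its Ready condition, and log node "multinode-671000" has status "Ready":"False" until the condition flips or the 6m budget expires. A minimal client-go sketch of that loop, matching the cadence and timeout visible in the log; function and variable names are ours, not minikube's:

    package main

    import (
        "context"
        "fmt"
        "os"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady polls the API server until the node's Ready condition is
    // True, at a 500ms interval with a 6-minute overall timeout.
    func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // treat transient API errors as "not ready yet"
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        fmt.Printf("node %q has status %q:%q\n", name, c.Type, c.Status)
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        if err := waitNodeReady(context.Background(), cs, "multinode-671000"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }

Each iteration below corresponds to one call of the poll condition: the GET, its response headers (including the APF flowschema/prioritylevel UIDs), and the Node body with resourceVersion 1895, unchanged while the node stays NotReady.
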
	I1014 08:46:20.989170   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:20.989170   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:20.989170   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:20.989170   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:20.993850   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:20.993920   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:20.993920   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:20.993920   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:20.993920   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:20.993920   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:20.993920   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:20 GMT
	I1014 08:46:20.994009   15224 round_trippers.go:580]     Audit-Id: 7464651d-7d50-4a2f-bf97-57247a07d5fc
	I1014 08:46:20.995204   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:21.490070   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:21.490070   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:21.490070   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:21.490070   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:21.495003   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:21.495114   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:21.495114   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:21 GMT
	I1014 08:46:21.495196   15224 round_trippers.go:580]     Audit-Id: 69fcf899-fb08-436b-b860-9d7bf5403e18
	I1014 08:46:21.495263   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:21.495263   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:21.495263   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:21.495263   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:21.495492   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:21.989789   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:21.989856   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:21.989856   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:21.989856   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:21.994919   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:21.994919   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:21.995025   15224 round_trippers.go:580]     Audit-Id: 4591f3ee-302b-4b1d-bc3b-8f40dd26e8d1
	I1014 08:46:21.995025   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:21.995025   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:21.995025   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:21.995025   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:21.995025   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:21 GMT
	I1014 08:46:21.995122   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:22.489573   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:22.489573   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:22.489573   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:22.489573   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:22.494198   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:22.494867   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:22.494867   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:22.494867   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:22.494867   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:22 GMT
	I1014 08:46:22.494867   15224 round_trippers.go:580]     Audit-Id: f3460bd5-b9fa-4bc5-98f4-8bbd9559aedf
	I1014 08:46:22.494867   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:22.494867   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:22.495223   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:22.989693   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:22.989693   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:22.989693   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:22.989693   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:23.000733   15224 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1014 08:46:23.000733   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:23.000733   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:23.000733   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:23.000733   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:23 GMT
	I1014 08:46:23.000733   15224 round_trippers.go:580]     Audit-Id: 37ba7b53-0164-4e4a-92fc-d738109fbe97
	I1014 08:46:23.000733   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:23.000733   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:23.000733   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:23.001726   15224 node_ready.go:53] node "multinode-671000" has status "Ready":"False"
	I1014 08:46:23.489664   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:23.489664   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:23.489664   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:23.489664   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:23.493925   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:23.493925   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:23.493925   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:23.494022   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:23.494022   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:23.494022   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:23.494022   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:23 GMT
	I1014 08:46:23.494022   15224 round_trippers.go:580]     Audit-Id: fbe77ef9-9725-42c9-9a43-fe0648d2785b
	I1014 08:46:23.494315   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:23.989306   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:23.989306   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:23.989306   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:23.989306   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:23.994245   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:23.994329   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:23.994329   15224 round_trippers.go:580]     Audit-Id: a8753d30-2702-4929-bde5-81de62393e5b
	I1014 08:46:23.994329   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:23.994329   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:23.994329   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:23.994329   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:23.994329   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:23 GMT
	I1014 08:46:23.994728   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:24.496715   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:24.496715   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:24.496841   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:24.496841   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:24.500912   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:24.501023   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:24.501023   15224 round_trippers.go:580]     Audit-Id: e052191c-bf0d-4f02-af7b-c2736a935942
	I1014 08:46:24.501023   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:24.501023   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:24.501023   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:24.501023   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:24.501023   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:24 GMT
	I1014 08:46:24.501367   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:24.989449   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:24.989449   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:24.989449   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:24.989449   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:24.993681   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:24.993681   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:24.993681   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:24.993681   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:24.993681   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:24 GMT
	I1014 08:46:24.993851   15224 round_trippers.go:580]     Audit-Id: 9d37bd0a-7988-43a3-aa0b-159b6a7eec19
	I1014 08:46:24.993851   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:24.993851   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:24.994137   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:25.489868   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:25.489943   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:25.489943   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:25.489943   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:25.496479   15224 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1014 08:46:25.496479   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:25.496479   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:25.496479   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:25.496479   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:25.496479   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:25 GMT
	I1014 08:46:25.496479   15224 round_trippers.go:580]     Audit-Id: e1f097b2-0a02-4c90-bd33-f95a4c1b08bd
	I1014 08:46:25.496479   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:25.496479   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:25.497321   15224 node_ready.go:53] node "multinode-671000" has status "Ready":"False"
	I1014 08:46:25.990029   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:25.990029   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:25.990029   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:25.990029   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:25.994352   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:25.994796   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:25.994796   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:25.994796   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:25 GMT
	I1014 08:46:25.994796   15224 round_trippers.go:580]     Audit-Id: 90663653-f659-418b-8bc5-ac54bbaab39f
	I1014 08:46:25.994796   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:25.994796   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:25.994796   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:25.995247   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:26.489549   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:26.489549   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:26.490181   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:26.490181   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:26.495265   15224 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:46:26.495331   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:26.495331   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:26.495408   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:26.495408   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:26.495408   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:26.495408   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:26 GMT
	I1014 08:46:26.495408   15224 round_trippers.go:580]     Audit-Id: 25742e4f-471f-40b2-834c-a84f8f670590
	I1014 08:46:26.495610   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:26.989919   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:26.990457   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:26.990457   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:26.990457   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:27.005402   15224 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I1014 08:46:27.005402   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:27.005402   15224 round_trippers.go:580]     Audit-Id: 6457596b-92b6-46c1-b4a5-c5635f465c51
	I1014 08:46:27.005402   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:27.005402   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:27.005402   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:27.005402   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:27.005402   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:27 GMT
	I1014 08:46:27.005402   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:27.490507   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:27.490600   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:27.490600   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:27.490600   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:27.494888   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:27.494962   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:27.494962   15224 round_trippers.go:580]     Audit-Id: ddfa7714-e88c-48c2-8ff7-53c248cddda8
	I1014 08:46:27.495019   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:27.495019   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:27.495019   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:27.495019   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:27.495019   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:27 GMT
	I1014 08:46:27.495083   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:27.990174   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:27.990174   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:27.990174   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:27.990174   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:27.995608   15224 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:46:27.995684   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:27.995684   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:27.995765   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:27 GMT
	I1014 08:46:27.995765   15224 round_trippers.go:580]     Audit-Id: 01f27a49-39f4-46da-9a7e-28bcfb69916a
	I1014 08:46:27.995765   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:27.995765   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:27.995765   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:27.995903   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:27.996877   15224 node_ready.go:53] node "multinode-671000" has status "Ready":"False"
	I1014 08:46:28.490581   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:28.490656   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:28.490656   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:28.490656   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:28.495236   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:28.495301   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:28.495301   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:28.495301   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:28.495301   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:28.495301   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:28 GMT
	I1014 08:46:28.495301   15224 round_trippers.go:580]     Audit-Id: 967a4b9d-0ad3-46a9-b21f-72fae183488c
	I1014 08:46:28.495301   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:28.495730   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:28.989660   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:28.989660   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:28.989660   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:28.989660   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:28.993840   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:28.994392   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:28.994392   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:28 GMT
	I1014 08:46:28.994392   15224 round_trippers.go:580]     Audit-Id: 4f8d1207-1ceb-4deb-89d7-efb8f832d8d0
	I1014 08:46:28.994392   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:28.994392   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:28.994392   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:28.994392   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:28.994826   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:29.489481   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:29.489481   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:29.490118   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:29.490118   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:29.494219   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:29.494291   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:29.494291   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:29.494291   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:29.494291   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:29.494291   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:29 GMT
	I1014 08:46:29.494291   15224 round_trippers.go:580]     Audit-Id: ba854cb9-1628-4230-88e6-6b29de214981
	I1014 08:46:29.494291   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:29.494291   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:29.989475   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:29.989475   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:29.989475   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:29.989475   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:29.993484   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:29.993484   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:29.993484   15224 round_trippers.go:580]     Audit-Id: ba30fc62-10d7-49ed-9b11-72d825c5536a
	I1014 08:46:29.993484   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:29.993484   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:29.993484   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:29.993484   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:29.993484   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:29 GMT
	I1014 08:46:29.993484   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:30.489983   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:30.489983   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:30.489983   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:30.489983   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:30.494949   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:30.495054   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:30.495054   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:30.495054   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:30.495054   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:30.495054   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:30 GMT
	I1014 08:46:30.495054   15224 round_trippers.go:580]     Audit-Id: 281929cf-ac7a-428b-b06b-18a2823ea343
	I1014 08:46:30.495054   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:30.495378   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:30.496048   15224 node_ready.go:53] node "multinode-671000" has status "Ready":"False"
	I1014 08:46:30.990110   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:30.990110   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:30.990110   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:30.990110   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:30.995119   15224 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:46:30.995119   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:30.995119   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:30.995119   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:30.995119   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:30 GMT
	I1014 08:46:30.995119   15224 round_trippers.go:580]     Audit-Id: 839313f2-a3a3-41c1-a9a3-b0cfbe670573
	I1014 08:46:30.995119   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:30.995119   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:30.995119   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:31.490036   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:31.490036   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:31.490036   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:31.490036   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:31.495114   15224 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:46:31.495202   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:31.495202   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:31.495202   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:31.495202   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:31 GMT
	I1014 08:46:31.495278   15224 round_trippers.go:580]     Audit-Id: 731a244d-c39d-4c3a-8ca1-2ad9cebe906d
	I1014 08:46:31.495278   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:31.495278   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:31.496329   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:31.989401   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:31.989401   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:31.989401   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:31.989401   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:31.994608   15224 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:46:31.994684   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:31.994778   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:31 GMT
	I1014 08:46:31.994778   15224 round_trippers.go:580]     Audit-Id: 1c2c5522-6771-4faa-abac-381f1772deb5
	I1014 08:46:31.994778   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:31.994778   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:31.994778   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:31.994778   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:31.995322   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:32.489923   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:32.489923   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:32.489923   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:32.489923   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:32.495311   15224 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:46:32.495404   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:32.495404   15224 round_trippers.go:580]     Audit-Id: 8a13ff28-51ab-47d3-a487-c7067b004aaa
	I1014 08:46:32.495404   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:32.495404   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:32.495404   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:32.495404   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:32.495404   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:32 GMT
	I1014 08:46:32.495662   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:32.989950   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:32.989950   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:32.989950   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:32.989950   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:32.994813   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:32.994945   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:32.995010   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:32.995010   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:32 GMT
	I1014 08:46:32.995010   15224 round_trippers.go:580]     Audit-Id: a807225b-7234-4720-817c-dd74eaf7bb3d
	I1014 08:46:32.995010   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:32.995010   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:32.995010   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:32.995010   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:32.995930   15224 node_ready.go:53] node "multinode-671000" has status "Ready":"False"
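
The repeated GET/response cycle above is minikube's node-readiness wait: node_ready.go re-fetches the Node object roughly every 500 ms and inspects its Ready condition, logging "Ready":"False" until the kubelet reports otherwise. A minimal client-go sketch of an equivalent poll follows; the node name and the ~500 ms cadence come from this log, while the kubeconfig path and the plain Get-then-sleep structure are illustrative assumptions, not minikube's actual implementation.

// Sketch only: poll a Node until its Ready condition is True.
// Assumptions: kubeconfig at the default home path; node name taken from the log above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the Node carries a Ready condition with status True.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		// The same request the log records: GET /api/v1/nodes/multinode-671000
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "multinode-671000", metav1.GetOptions{})
		if err == nil && nodeReady(node) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500 ms poll interval seen above
	}
}
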
	I1014 08:46:33.490366   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:33.490366   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:33.490366   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:33.490366   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:33.494840   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:33.494965   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:33.494965   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:33 GMT
	I1014 08:46:33.494965   15224 round_trippers.go:580]     Audit-Id: c29c4850-ebb6-4705-83ec-0b0483df99f2
	I1014 08:46:33.494965   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:33.494965   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:33.494965   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:33.494965   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:33.495156   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:33.989373   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:33.989373   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:33.989373   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:33.989373   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:33.994100   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:33.994171   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:33.994171   15224 round_trippers.go:580]     Audit-Id: 009cd9f4-48dd-48a4-994f-9f2bf54e56aa
	I1014 08:46:33.994231   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:33.994231   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:33.994231   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:33.994231   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:33.994231   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:33 GMT
	I1014 08:46:33.994750   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:34.489959   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:34.489959   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:34.489959   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:34.489959   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:34.494602   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:34.494602   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:34.494602   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:34.494742   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:34.494742   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:34.494742   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:34.494742   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:34 GMT
	I1014 08:46:34.494742   15224 round_trippers.go:580]     Audit-Id: b5628196-9f24-46db-9e1b-76596ab7641f
	I1014 08:46:34.495174   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:34.989658   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:34.989658   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:34.989658   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:34.989658   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:34.995512   15224 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:46:34.995512   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:34.995512   15224 round_trippers.go:580]     Audit-Id: 21863c3c-5205-4b95-bc17-463765c6acbd
	I1014 08:46:34.995512   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:34.995650   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:34.995650   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:34.995650   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:34.995650   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:34 GMT
	I1014 08:46:34.996270   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:34.996831   15224 node_ready.go:53] node "multinode-671000" has status "Ready":"False"
	I1014 08:46:35.489329   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:35.489329   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:35.489329   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:35.489329   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:35.493961   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:35.493961   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:35.494027   15224 round_trippers.go:580]     Audit-Id: 67262c8a-545e-48ca-ab0e-016585502540
	I1014 08:46:35.494027   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:35.494027   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:35.494027   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:35.494027   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:35.494027   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:35 GMT
	I1014 08:46:35.494027   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:35.989339   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:35.989339   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:35.989339   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:35.989339   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:35.993558   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:35.993558   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:35.993558   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:35.993558   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:35.993755   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:35 GMT
	I1014 08:46:35.993755   15224 round_trippers.go:580]     Audit-Id: b2c42c30-190f-437f-99bc-b7442cab2daf
	I1014 08:46:35.993755   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:35.993755   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:35.994348   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:36.489305   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:36.489305   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:36.489305   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:36.489305   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:36.494871   15224 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:46:36.494871   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:36.494871   15224 round_trippers.go:580]     Audit-Id: 7b7650a4-e14c-42a5-8351-49c336ef59a2
	I1014 08:46:36.495413   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:36.495413   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:36.495413   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:36.495413   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:36.495413   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:36 GMT
	I1014 08:46:36.495639   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:36.989664   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:36.989664   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:36.989664   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:36.989664   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:36.995052   15224 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:46:36.995052   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:36.995131   15224 round_trippers.go:580]     Audit-Id: d131c52b-39ea-4d2c-a158-bc5b31a61e5d
	I1014 08:46:36.995131   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:36.995131   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:36.995131   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:36.995131   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:36.995131   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:36 GMT
	I1014 08:46:36.995553   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:37.490073   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:37.490183   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:37.490183   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:37.490183   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:37.495848   15224 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:46:37.495944   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:37.495944   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:37.495944   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:37.495944   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:37 GMT
	I1014 08:46:37.495944   15224 round_trippers.go:580]     Audit-Id: b64a4090-20c4-4569-b622-4e31f5e9097c
	I1014 08:46:37.496017   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:37.496017   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:37.496324   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:37.497030   15224 node_ready.go:53] node "multinode-671000" has status "Ready":"False"
	I1014 08:46:37.989954   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:37.989954   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:37.989954   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:37.989954   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:37.994239   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:37.994372   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:37.994372   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:37.994372   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:37.994372   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:37.994372   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:37.994372   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:37 GMT
	I1014 08:46:37.994372   15224 round_trippers.go:580]     Audit-Id: 977cae36-9de4-4e41-ae2f-047dc5d41284
	I1014 08:46:37.994768   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:38.489778   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:38.489778   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:38.489778   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:38.489778   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:38.495123   15224 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:46:38.495218   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:38.495218   15224 round_trippers.go:580]     Audit-Id: d572e744-f4e9-4bef-b476-957159d67e33
	I1014 08:46:38.495218   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:38.495218   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:38.495218   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:38.495218   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:38.495218   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:38 GMT
	I1014 08:46:38.495471   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:38.989850   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:38.989850   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:38.989850   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:38.989850   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:38.993902   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:38.993993   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:38.993993   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:38.993993   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:38.993993   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:38.993993   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:38.994073   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:38 GMT
	I1014 08:46:38.994073   15224 round_trippers.go:580]     Audit-Id: 23a92d7a-2949-4129-a8da-ac9d3dcb3881
	I1014 08:46:38.995039   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:39.489385   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:39.489385   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:39.489385   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:39.489385   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:39.494487   15224 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:46:39.494487   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:39.494577   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:39.494577   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:39.494577   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:39.494577   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:39 GMT
	I1014 08:46:39.494577   15224 round_trippers.go:580]     Audit-Id: 16e8e9f0-8b63-4207-b310-3326dea741ff
	I1014 08:46:39.494577   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:39.494798   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:39.989393   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:39.989393   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:39.989393   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:39.989393   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:39.998914   15224 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1014 08:46:39.999002   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:39.999084   15224 round_trippers.go:580]     Audit-Id: cb42705c-9bee-49ed-98ac-dbd4cfe3f8c5
	I1014 08:46:39.999084   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:39.999084   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:39.999084   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:39.999084   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:39.999084   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:40 GMT
	I1014 08:46:39.999543   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:39.999813   15224 node_ready.go:53] node "multinode-671000" has status "Ready":"False"
	I1014 08:46:40.489712   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:40.489712   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:40.489712   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:40.489712   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:40.495782   15224 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1014 08:46:40.495782   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:40.495782   15224 round_trippers.go:580]     Audit-Id: 3f6e5b78-2574-4765-bf70-84f927d22f4f
	I1014 08:46:40.495782   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:40.495782   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:40.495782   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:40.495782   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:40.495782   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:40 GMT
	I1014 08:46:40.496243   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:40.989460   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:40.989460   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:40.989460   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:40.989460   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:40.994352   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:40.994460   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:40.994588   15224 round_trippers.go:580]     Audit-Id: 45f911a3-9906-4ae9-b83d-2ab36d1e83b2
	I1014 08:46:40.994588   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:40.994609   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:40.994609   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:40.994609   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:40.994609   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:40 GMT
	I1014 08:46:40.994766   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:41.489414   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:41.489414   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:41.489414   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:41.489414   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:41.496474   15224 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1014 08:46:41.496474   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:41.496560   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:41.496560   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:41 GMT
	I1014 08:46:41.496560   15224 round_trippers.go:580]     Audit-Id: b492ec8a-176c-4ec1-9d5e-39e50903b41c
	I1014 08:46:41.496560   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:41.496560   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:41.496761   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:41.496862   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:41.989927   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:41.989927   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:41.989927   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:41.989927   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:41.994844   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:41.994948   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:41.994948   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:41.994948   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:41.994948   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:41 GMT
	I1014 08:46:41.994948   15224 round_trippers.go:580]     Audit-Id: ebe96fa1-0464-49c2-a2a5-755ec8aa99e0
	I1014 08:46:41.994948   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:41.994948   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:41.995294   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:42.489724   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:42.489724   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:42.489724   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:42.489724   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:42.494852   15224 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:46:42.494852   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:42.494852   15224 round_trippers.go:580]     Audit-Id: 27e616b5-1b0f-44c6-822a-da6ff38ab34b
	I1014 08:46:42.494852   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:42.494852   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:42.494852   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:42.494852   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:42.494852   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:42 GMT
	I1014 08:46:42.495305   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:42.496428   15224 node_ready.go:53] node "multinode-671000" has status "Ready":"False"
	I1014 08:46:42.989752   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:42.989752   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:42.989752   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:42.989752   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:43.002330   15224 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1014 08:46:43.002451   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:43.002451   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:43 GMT
	I1014 08:46:43.002451   15224 round_trippers.go:580]     Audit-Id: 6338e68a-d919-40cc-9cea-dc4b1b255779
	I1014 08:46:43.002451   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:43.002451   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:43.002451   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:43.002451   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:43.002852   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:43.490135   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:43.490135   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:43.490135   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:43.490135   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:43.498057   15224 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1014 08:46:43.498057   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:43.498057   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:43.498057   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:43.498057   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:43.498057   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:43.498057   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:43 GMT
	I1014 08:46:43.498057   15224 round_trippers.go:580]     Audit-Id: 0ec2c709-8b4b-4e66-a422-286a261c3534
	I1014 08:46:43.498057   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:43.989303   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:43.989303   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:43.989303   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:43.989303   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:43.995671   15224 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1014 08:46:43.995945   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:43.995945   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:43.995945   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:43.995945   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:43.995945   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:43 GMT
	I1014 08:46:43.995945   15224 round_trippers.go:580]     Audit-Id: 97d3b00a-fecc-4473-8932-a14671c84e57
	I1014 08:46:43.995994   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:43.996413   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:44.490156   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:44.490156   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:44.490156   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:44.490156   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:44.494415   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:44.494500   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:44.494500   15224 round_trippers.go:580]     Audit-Id: 94d462bf-1072-416b-907a-70500d4dad49
	I1014 08:46:44.494500   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:44.494500   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:44.494500   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:44.494500   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:44.494500   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:44 GMT
	I1014 08:46:44.494925   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:44.990178   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:44.990178   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:44.990178   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:44.990178   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:44.995338   15224 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:46:44.995457   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:44.995457   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:44.995457   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:44.995457   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:44.995548   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:44 GMT
	I1014 08:46:44.995548   15224 round_trippers.go:580]     Audit-Id: 42adf9a8-1575-40c8-9f65-dabe8574908d
	I1014 08:46:44.995548   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:44.996081   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:44.996918   15224 node_ready.go:53] node "multinode-671000" has status "Ready":"False"
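	[editor's note] The lines above are one iteration of minikube's node-readiness wait: node_ready.go re-GETs the Node object roughly every 500ms and checks its "Ready" condition, with client-go's round_trippers tracing each request and response at high log verbosity. As an illustrative sketch only (not minikube's actual implementation), a minimal client-go poll loop of this shape could look like the following; the node name is taken from the log, while the kubeconfig path, 500ms interval, and 6-minute timeout are assumptions.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeReady reports whether the Node's NodeReady condition is True,
	// i.e. the opposite of the `"Ready":"False"` status logged above.
	func nodeReady(node *corev1.Node) bool {
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Assumption: kubeconfig at the default location (~/.kube/config).
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		for {
			// Re-GET the Node each round, as the repeated round_trippers
			// GET /api/v1/nodes/multinode-671000 lines show.
			node, err := client.CoreV1().Nodes().Get(ctx, "multinode-671000", metav1.GetOptions{})
			if err == nil && nodeReady(node) {
				fmt.Println("node is Ready")
				return
			}
			select {
			case <-ctx.Done():
				panic("timed out waiting for node readiness")
			case <-time.After(500 * time.Millisecond): // assumed poll interval
			}
		}
	}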
	I1014 08:46:45.489909   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:45.489909   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:45.489909   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:45.489909   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:45.493927   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:45.494926   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:45.494926   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:45.494926   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:45 GMT
	I1014 08:46:45.494926   15224 round_trippers.go:580]     Audit-Id: 27646770-9000-4604-8a4a-bdb69bbd9c82
	I1014 08:46:45.494926   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:45.495035   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:45.495035   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:45.495035   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:45.989467   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:45.989467   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:45.989467   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:45.989467   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:45.993688   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:45.993736   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:45.993736   15224 round_trippers.go:580]     Audit-Id: 705aa739-1e38-498f-9171-44fae4701e8a
	I1014 08:46:45.993736   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:45.993736   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:45.993736   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:45.993736   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:45.993736   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:45 GMT
	I1014 08:46:45.994360   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:46.489297   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:46.489297   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:46.489297   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:46.489297   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:46.493802   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:46.493864   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:46.493864   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:46.493864   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:46.493864   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:46.493864   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:46.493936   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:46 GMT
	I1014 08:46:46.493936   15224 round_trippers.go:580]     Audit-Id: 0f9a51bd-3ffb-4247-9dd7-4ac41c6f2d8d
	I1014 08:46:46.494086   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:46.989529   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:46.989529   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:46.989529   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:46.989529   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:46.994311   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:46.994311   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:46.994311   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:46.994311   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:46.994311   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:46 GMT
	I1014 08:46:46.994311   15224 round_trippers.go:580]     Audit-Id: 1b363a92-3f81-4227-9fa2-eaecc3268d56
	I1014 08:46:46.994311   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:46.994311   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:46.994805   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:47.490760   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:47.490850   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:47.490850   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:47.490850   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:47.494140   15224 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:46:47.495075   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:47.495075   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:47.495075   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:47.495075   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:47.495075   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:47.495075   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:47 GMT
	I1014 08:46:47.495075   15224 round_trippers.go:580]     Audit-Id: a8ce0b07-9b0b-453c-ac24-80d0830afdcf
	I1014 08:46:47.495504   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:47.496400   15224 node_ready.go:53] node "multinode-671000" has status "Ready":"False"
	I1014 08:46:47.990019   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:47.990107   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:47.990107   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:47.990107   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:47.995328   15224 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:46:47.995328   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:47.995408   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:47.995408   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:47 GMT
	I1014 08:46:47.995408   15224 round_trippers.go:580]     Audit-Id: 73ad273e-b8d5-47c8-b513-5ad2cbe15613
	I1014 08:46:47.995408   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:47.995408   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:47.995408   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:47.995758   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:48.489749   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:48.489749   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:48.489749   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:48.489749   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:48.495427   15224 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:46:48.495427   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:48.495427   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:48 GMT
	I1014 08:46:48.495427   15224 round_trippers.go:580]     Audit-Id: f3c40b99-dded-4472-b23e-a851851b597a
	I1014 08:46:48.495427   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:48.495427   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:48.495427   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:48.495427   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:48.495688   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:48.989696   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:48.989696   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:48.989696   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:48.989696   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:48.993668   15224 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:46:48.993766   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:48.993766   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:48.993766   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:48 GMT
	I1014 08:46:48.993830   15224 round_trippers.go:580]     Audit-Id: 857dbda7-3121-4648-9747-4c54502a5f60
	I1014 08:46:48.993830   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:48.993830   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:48.993830   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:48.994024   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:49.489817   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:49.489817   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:49.489817   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:49.489817   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:49.494302   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:49.494563   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:49.494563   15224 round_trippers.go:580]     Audit-Id: 3b3708e8-6803-4386-9a89-a4442fab2d53
	I1014 08:46:49.494563   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:49.494563   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:49.494563   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:49.494563   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:49.494678   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:49 GMT
	I1014 08:46:49.495008   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:49.990217   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:49.990217   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:49.990217   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:49.990217   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:49.995090   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:49.995170   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:49.995170   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:49 GMT
	I1014 08:46:49.995170   15224 round_trippers.go:580]     Audit-Id: c9d555a8-f657-47c9-9ae1-8bd3dab1daff
	I1014 08:46:49.995170   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:49.995170   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:49.995170   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:49.995283   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:49.996318   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:49.996986   15224 node_ready.go:53] node "multinode-671000" has status "Ready":"False"
	I1014 08:46:50.489350   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:50.489350   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:50.489350   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:50.489350   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:50.494281   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:50.494281   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:50.494281   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:50 GMT
	I1014 08:46:50.494281   15224 round_trippers.go:580]     Audit-Id: 81abafa1-f88a-4b50-8876-6f8549149675
	I1014 08:46:50.494281   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:50.494281   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:50.494417   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:50.494417   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:50.494579   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:50.989973   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:50.990095   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:50.990095   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:50.990095   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:50.994061   15224 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:46:50.994061   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:50.994061   15224 round_trippers.go:580]     Audit-Id: e02df781-3978-4ddd-a97c-d19007c16b3c
	I1014 08:46:50.994146   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:50.994146   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:50.994146   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:50.994146   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:50.994146   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:50 GMT
	I1014 08:46:50.994433   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:51.489477   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:51.489477   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:51.489477   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:51.489477   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:51.494749   15224 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:46:51.494929   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:51.494929   15224 round_trippers.go:580]     Audit-Id: 8f81d5ea-52eb-406a-8cb2-d2f107e35d1d
	I1014 08:46:51.494929   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:51.494929   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:51.494929   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:51.494929   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:51.494929   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:51 GMT
	I1014 08:46:51.495350   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:51.989691   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:51.989691   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:51.989691   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:51.989691   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:51.994933   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:51.994933   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:51.994933   15224 round_trippers.go:580]     Audit-Id: bf7d1a75-7588-4c37-abd8-3fc85705e86f
	I1014 08:46:51.994933   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:51.994933   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:51.994933   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:51.994933   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:51.994933   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:51 GMT
	I1014 08:46:51.995467   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:52.490142   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:52.490290   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:52.490290   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:52.490290   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:52.495473   15224 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:46:52.495602   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:52.495602   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:52.495602   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:52.495667   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:52.495667   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:52.495667   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:52 GMT
	I1014 08:46:52.495667   15224 round_trippers.go:580]     Audit-Id: d9052630-29a4-408f-8b7b-a2fb03a6c8f9
	I1014 08:46:52.495967   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:52.496634   15224 node_ready.go:53] node "multinode-671000" has status "Ready":"False"
	I1014 08:46:52.989455   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:52.989455   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:52.989455   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:52.989455   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:52.994309   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:52.994309   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:52.994309   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:52.994309   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:52.994309   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:52.994309   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:52 GMT
	I1014 08:46:52.994309   15224 round_trippers.go:580]     Audit-Id: 36577b42-b797-4b67-80a5-12b0603607e8
	I1014 08:46:52.994309   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:52.994897   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:53.489335   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:53.489335   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:53.489335   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:53.489335   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:53.493499   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:53.493499   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:53.493499   15224 round_trippers.go:580]     Audit-Id: e3c5a0e8-38f5-428a-8d10-37d7cbd5deed
	I1014 08:46:53.493499   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:53.493499   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:53.493499   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:53.493499   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:53.493499   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:53 GMT
	I1014 08:46:53.493499   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:53.990296   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:53.990296   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:53.990296   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:53.990296   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:53.995720   15224 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:46:53.995720   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:53.995720   15224 round_trippers.go:580]     Audit-Id: 07ef5a7c-c75c-4d89-8350-a4869eb60e78
	I1014 08:46:53.995826   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:53.995826   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:53.995826   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:53.995826   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:53.995826   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:53 GMT
	I1014 08:46:53.996085   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:54.489398   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:54.489398   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:54.489398   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:54.489398   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:54.494244   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:54.494347   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:54.494347   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:54.494347   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:54.494347   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:54.494457   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:54.494473   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:54 GMT
	I1014 08:46:54.494473   15224 round_trippers.go:580]     Audit-Id: 0ca8a6a3-bcfe-4af1-ad8a-169d3adbc2bd
	I1014 08:46:54.494862   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:54.989766   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:54.989766   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:54.989766   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:54.989766   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:54.994205   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:54.994304   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:54.994384   15224 round_trippers.go:580]     Audit-Id: 2c1df583-84ae-4275-b742-82222889c9b2
	I1014 08:46:54.994384   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:54.994384   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:54.994384   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:54.994461   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:54.994461   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:54 GMT
	I1014 08:46:54.994876   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:54.995247   15224 node_ready.go:53] node "multinode-671000" has status "Ready":"False"
	I1014 08:46:55.489610   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:55.489610   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:55.489610   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:55.489610   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:55.494524   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:55.494671   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:55.494671   15224 round_trippers.go:580]     Audit-Id: 082f932d-ec8e-4a2a-ada3-508bd59c62a8
	I1014 08:46:55.494671   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:55.494671   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:55.494671   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:55.494781   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:55.494781   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:55 GMT
	I1014 08:46:55.495539   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:55.990329   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:55.990413   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:55.990413   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:55.990413   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:55.994348   15224 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:46:55.994423   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:55.994423   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:55.994423   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:55.994423   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:55.994423   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:55 GMT
	I1014 08:46:55.994423   15224 round_trippers.go:580]     Audit-Id: 6620601b-e8ab-4b6a-9010-258a9911c717
	I1014 08:46:55.994423   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:55.994916   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:56.489406   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:56.489406   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:56.489406   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:56.489406   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:56.495291   15224 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:46:56.495291   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:56.495291   15224 round_trippers.go:580]     Audit-Id: 95d3dfb4-cab8-4836-8ea2-1e246f19b191
	I1014 08:46:56.495291   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:56.495291   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:56.495291   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:56.495291   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:56.495291   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:56 GMT
	I1014 08:46:56.495757   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:56.990798   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:56.990868   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:56.990868   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:56.990868   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:56.995406   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:56.995406   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:56.995406   15224 round_trippers.go:580]     Audit-Id: b5c1d4bf-a5b4-4852-a272-aa39896b6296
	I1014 08:46:56.995406   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:56.995533   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:56.995533   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:56.995533   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:56.995533   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:56 GMT
	I1014 08:46:56.995999   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:56.996721   15224 node_ready.go:53] node "multinode-671000" has status "Ready":"False"
	I1014 08:46:57.489734   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:57.490409   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:57.490409   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:57.490409   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:57.497223   15224 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1014 08:46:57.497223   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:57.497765   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:57.497765   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:57.497765   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:57 GMT
	I1014 08:46:57.497765   15224 round_trippers.go:580]     Audit-Id: fcbc9d95-4dc2-4e1a-a620-4af521199e00
	I1014 08:46:57.497765   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:57.497765   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:57.498052   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:57.989810   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:57.989810   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:57.989810   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:57.989810   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:57.995111   15224 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:46:57.995111   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:57.995111   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:57.995111   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:57.995111   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:57.995111   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:57.995111   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:57 GMT
	I1014 08:46:57.995280   15224 round_trippers.go:580]     Audit-Id: 4222d4c4-e7f8-444e-8503-911380b5e0dd
	I1014 08:46:57.995437   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:58.490171   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:58.490171   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:58.490171   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:58.490171   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:58.495054   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:58.495054   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:58.495190   15224 round_trippers.go:580]     Audit-Id: 9368c534-c02c-432c-8400-5909cd499382
	I1014 08:46:58.495190   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:58.495190   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:58.495190   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:58.495190   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:58.495190   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:58 GMT
	I1014 08:46:58.495565   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:58.990986   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:58.990986   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:58.991105   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:58.991105   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:58.996046   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:58.996182   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:58.996182   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:58.996182   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:58.996182   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:58.996182   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:58 GMT
	I1014 08:46:58.996182   15224 round_trippers.go:580]     Audit-Id: c67f02f0-e95e-45d1-bc36-d41e98e658c4
	I1014 08:46:58.996182   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:58.996579   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:58.997219   15224 node_ready.go:53] node "multinode-671000" has status "Ready":"False"
	I1014 08:46:59.490263   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:59.490263   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:59.490263   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:59.490263   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:59.495397   15224 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:46:59.495482   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:59.495482   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:59.495571   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:59 GMT
	I1014 08:46:59.495571   15224 round_trippers.go:580]     Audit-Id: 5b632c91-9f87-46d1-a6cd-9f538d441472
	I1014 08:46:59.495571   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:59.495571   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:59.495571   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:59.495752   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:59.989450   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:59.989450   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:59.989450   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:59.989450   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:59.994158   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:59.994224   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:59.994224   15224 round_trippers.go:580]     Audit-Id: ce781060-8ec1-44c2-8c26-2d7adda6081d
	I1014 08:46:59.994224   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:59.994224   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:59.994224   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:59.994224   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:59.994224   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:59 GMT
	I1014 08:46:59.994675   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:46:59.995415   15224 node_ready.go:49] node "multinode-671000" has status "Ready":"True"
	I1014 08:46:59.995501   15224 node_ready.go:38] duration metric: took 39.0062602s for node "multinode-671000" to be "Ready" ...
	I1014 08:46:59.995624   15224 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
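
The polling visible above is minikube's readiness wait: node_ready.go GETs /api/v1/nodes/multinode-671000 roughly every 500ms until the node's Ready condition reports True (here after 39.0s), and pod_ready.go then applies the same pattern to the system-critical pods for up to 6m0s. Below is a minimal client-go sketch of such a loop, included for orientation only; it is an illustration under assumptions, not minikube's actual implementation. The helper name pollNodeReady, the 500ms cadence, and the kubeconfig path are all assumed for the example.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// pollNodeReady GETs the node on a fixed cadence and returns once its
// NodeReady condition is True, or when the context deadline expires.
// (Hypothetical helper; minikube's real loop lives in node_ready.go.)
func pollNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
	ticker := time.NewTicker(500 * time.Millisecond) // matches the ~500ms cadence in the log above
	defer ticker.Stop()
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil // node now reports "Ready":"True"
				}
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
		}
	}
}

func main() {
	// Assumed kubeconfig location; minikube builds its client differently.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := pollNodeReady(ctx, cs, "multinode-671000"); err != nil {
		panic(err)
	}
	fmt.Println(`node "multinode-671000" is "Ready":"True"`)
}

Each iteration of this loop corresponds to one GET/response-header/body group in the log stream above.
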
	I1014 08:46:59.995728   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods
	I1014 08:46:59.995791   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:59.995834   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:59.995834   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:59.999596   15224 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:47:00.000468   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:00.000468   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:00.000468   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:00.000551   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:00.000551   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:00 GMT
	I1014 08:47:00.000551   15224 round_trippers.go:580]     Audit-Id: 00661f32-59b5-4493-85d1-37c2d2ec69d5
	I1014 08:47:00.000551   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:00.001722   15224 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1981"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 90485 chars]
	I1014 08:47:00.006322   15224 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-fs9ct" in "kube-system" namespace to be "Ready" ...
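
For the per-pod wait that pod_ready.go:79 begins here, the condition being polled is the pod's PodReady condition; the poll keeps logging "Ready":"False" below until that condition turns True. A hedged sketch of the check itself follows, with the helper name isPodReady assumed for illustration rather than taken from minikube's code.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPodReady reports whether the pod's PodReady condition is True, which
// is the state the polling above is waiting for. (Assumed helper name.)
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	pod := &corev1.Pod{Status: corev1.PodStatus{Conditions: []corev1.PodCondition{
		{Type: corev1.PodReady, Status: corev1.ConditionFalse},
	}}}
	fmt.Println(isPodReady(pod)) // false, matching "Ready":"False" in the log
}
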
	I1014 08:47:00.006852   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:00.006852   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:00.006852   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:00.006852   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:00.010413   15224 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:47:00.010413   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:00.010959   15224 round_trippers.go:580]     Audit-Id: 2a6f04ea-daa5-47f0-91c0-1cd22fd3fdef
	I1014 08:47:00.010959   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:00.010959   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:00.010959   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:00.010959   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:00.010959   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:00 GMT
	I1014 08:47:00.011125   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I1014 08:47:00.011664   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:00.011946   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:00.011946   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:00.011946   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:00.015300   15224 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:47:00.015300   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:00.015300   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:00.015300   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:00.015300   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:00.015300   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:00.015300   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:00 GMT
	I1014 08:47:00.015300   15224 round_trippers.go:580]     Audit-Id: 33c1a51e-e3c3-4e7f-a0c9-e9f655238198
	I1014 08:47:00.015300   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:00.506917   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:00.506917   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:00.506917   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:00.506917   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:00.511099   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:47:00.511845   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:00.511845   15224 round_trippers.go:580]     Audit-Id: e6063197-bc1a-4dbb-957c-6c1f96de4807
	I1014 08:47:00.511845   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:00.511845   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:00.511845   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:00.511845   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:00.511845   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:00 GMT
	I1014 08:47:00.512146   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I1014 08:47:00.513106   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:00.513106   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:00.513106   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:00.513106   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:00.515410   15224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 08:47:00.516010   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:00.516057   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:00.516057   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:00.516057   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:00.516057   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:00 GMT
	I1014 08:47:00.516057   15224 round_trippers.go:580]     Audit-Id: 966850eb-f317-44d0-a477-5f237ba79d0a
	I1014 08:47:00.516057   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:00.516184   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:01.006773   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:01.006773   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:01.006773   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:01.006773   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:01.011420   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:47:01.011420   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:01.011420   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:01.011420   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:01 GMT
	I1014 08:47:01.011529   15224 round_trippers.go:580]     Audit-Id: 17612ab4-44ae-468f-a147-4fd39fa3429b
	I1014 08:47:01.011529   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:01.011529   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:01.011529   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:01.011628   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I1014 08:47:01.012748   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:01.012748   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:01.012836   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:01.012836   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:01.015372   15224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 08:47:01.016372   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:01.016372   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:01.016372   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:01 GMT
	I1014 08:47:01.016372   15224 round_trippers.go:580]     Audit-Id: c794ac18-a749-4264-938e-a5ece5b88a3c
	I1014 08:47:01.016372   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:01.016372   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:01.016372   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:01.016858   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:01.506525   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:01.506525   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:01.506525   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:01.506525   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:01.515311   15224 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1014 08:47:01.515500   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:01.515607   15224 round_trippers.go:580]     Audit-Id: 90af5554-ab19-4a16-9d30-debf4eee213c
	I1014 08:47:01.515629   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:01.515629   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:01.515629   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:01.515629   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:01.515629   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:01 GMT
	I1014 08:47:01.515629   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I1014 08:47:01.516665   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:01.516665   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:01.516840   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:01.516840   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:01.522253   15224 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:47:01.522253   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:01.522253   15224 round_trippers.go:580]     Audit-Id: 318e2a8f-9313-497b-880c-d640e0c4ccda
	I1014 08:47:01.522253   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:01.522253   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:01.522253   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:01.522253   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:01.522253   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:01 GMT
	I1014 08:47:01.522956   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:02.006385   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:02.007063   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:02.007063   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:02.007063   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:02.011482   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:47:02.011552   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:02.011629   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:02.011629   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:02.011629   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:02.011629   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:02.011629   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:02 GMT
	I1014 08:47:02.011629   15224 round_trippers.go:580]     Audit-Id: ac630c37-4e6f-483e-8137-1abf2e45cbd9
	I1014 08:47:02.012066   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I1014 08:47:02.012970   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:02.013039   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:02.013039   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:02.013039   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:02.015828   15224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 08:47:02.015828   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:02.015828   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:02.015828   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:02 GMT
	I1014 08:47:02.015828   15224 round_trippers.go:580]     Audit-Id: 06e8133b-4791-4a3b-a538-6c542d2a8c22
	I1014 08:47:02.015828   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:02.015828   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:02.015828   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:02.016237   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:02.016774   15224 pod_ready.go:103] pod "coredns-7c65d6cfc9-fs9ct" in "kube-system" namespace has status "Ready":"False"
	I1014 08:47:02.507142   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:02.507226   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:02.507226   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:02.507226   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:02.511342   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:47:02.511342   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:02.511436   15224 round_trippers.go:580]     Audit-Id: 9f7a3529-fdca-43f7-8b1b-61d8f29d36ad
	I1014 08:47:02.511436   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:02.511436   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:02.511436   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:02.511436   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:02.511436   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:02 GMT
	I1014 08:47:02.512091   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I1014 08:47:02.512959   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:02.513014   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:02.513014   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:02.513014   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:02.515897   15224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 08:47:02.515897   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:02.515897   15224 round_trippers.go:580]     Audit-Id: 102608cd-9d1b-404d-a652-368ac49fdc82
	I1014 08:47:02.515897   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:02.515897   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:02.515897   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:02.515897   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:02.515897   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:02 GMT
	I1014 08:47:02.515897   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:03.007073   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:03.007073   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:03.007073   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:03.007073   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:03.012190   15224 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:47:03.012190   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:03.012190   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:03.012190   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:03.012349   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:03.012349   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:03.012349   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:03 GMT
	I1014 08:47:03.012349   15224 round_trippers.go:580]     Audit-Id: 23145347-e683-4d7a-814b-569f0c15a257
	I1014 08:47:03.012522   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I1014 08:47:03.013465   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:03.013524   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:03.013524   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:03.013524   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:03.022157   15224 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1014 08:47:03.022157   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:03.022157   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:03 GMT
	I1014 08:47:03.022157   15224 round_trippers.go:580]     Audit-Id: 0304324c-83e4-4b30-a45f-95b24417cab2
	I1014 08:47:03.022157   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:03.022157   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:03.022157   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:03.022157   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:03.023151   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:03.507020   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:03.507020   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:03.507020   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:03.507020   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:03.515852   15224 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1014 08:47:03.515852   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:03.515852   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:03.515852   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:03.515852   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:03.515852   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:03 GMT
	I1014 08:47:03.515852   15224 round_trippers.go:580]     Audit-Id: 00ce8782-b3c5-418b-8a83-3b4eba2ad8da
	I1014 08:47:03.515852   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:03.515852   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I1014 08:47:03.517418   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:03.517418   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:03.517418   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:03.517418   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:03.520702   15224 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:47:03.520702   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:03.520702   15224 round_trippers.go:580]     Audit-Id: db73e000-7e18-4dbe-9c67-ebba8fb8f343
	I1014 08:47:03.520702   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:03.520702   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:03.520702   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:03.520702   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:03.520702   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:03 GMT
	I1014 08:47:03.520702   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:04.007090   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:04.007090   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:04.007090   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:04.007090   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:04.013222   15224 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1014 08:47:04.013222   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:04.013222   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:04 GMT
	I1014 08:47:04.013222   15224 round_trippers.go:580]     Audit-Id: 37596bf1-027e-4a33-804b-95394d501f4d
	I1014 08:47:04.013222   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:04.013222   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:04.013222   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:04.013222   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:04.013655   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I1014 08:47:04.014633   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:04.014633   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:04.014633   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:04.014633   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:04.017871   15224 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:47:04.017959   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:04.017959   15224 round_trippers.go:580]     Audit-Id: 9af1f259-13d3-472f-85bb-da8201f00842
	I1014 08:47:04.017959   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:04.017959   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:04.017959   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:04.017959   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:04.017959   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:04 GMT
	I1014 08:47:04.017959   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:04.018929   15224 pod_ready.go:103] pod "coredns-7c65d6cfc9-fs9ct" in "kube-system" namespace has status "Ready":"False"
	I1014 08:47:04.507142   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:04.507223   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:04.507223   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:04.507223   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:04.512321   15224 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:47:04.512402   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:04.512508   15224 round_trippers.go:580]     Audit-Id: f2852b26-65ec-4e52-adb6-8e3f8bcf790b
	I1014 08:47:04.512508   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:04.512508   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:04.512508   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:04.512508   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:04.512572   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:04 GMT
	I1014 08:47:04.512761   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I1014 08:47:04.513530   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:04.513530   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:04.513604   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:04.513604   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:04.516658   15224 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:47:04.516658   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:04.516767   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:04.516767   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:04.516767   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:04 GMT
	I1014 08:47:04.516767   15224 round_trippers.go:580]     Audit-Id: a2b83566-4bad-4a91-8aa9-80ed347dabf6
	I1014 08:47:04.516767   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:04.516767   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:04.517095   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:05.007774   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:05.007853   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:05.007853   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:05.007853   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:05.011679   15224 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:47:05.011679   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:05.011799   15224 round_trippers.go:580]     Audit-Id: 708ac586-f5ff-4af7-99d6-c53fb95089c3
	I1014 08:47:05.011799   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:05.011799   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:05.011799   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:05.011799   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:05.011903   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:05 GMT
	I1014 08:47:05.012191   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I1014 08:47:05.013122   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:05.013122   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:05.013122   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:05.013122   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:05.016764   15224 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:47:05.016872   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:05.016872   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:05.016872   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:05 GMT
	I1014 08:47:05.016872   15224 round_trippers.go:580]     Audit-Id: f9e1ee8e-1181-4ec9-b450-43010ae103d9
	I1014 08:47:05.016872   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:05.016872   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:05.016872   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:05.017839   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:05.507030   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:05.507706   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:05.507706   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:05.507706   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:05.512023   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:47:05.512023   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:05.512023   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:05.512023   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:05.512096   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:05.512096   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:05 GMT
	I1014 08:47:05.512096   15224 round_trippers.go:580]     Audit-Id: a2f37341-f157-495f-a7ce-1d46bbabc594
	I1014 08:47:05.512096   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:05.512325   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I1014 08:47:05.513009   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:05.513009   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:05.513178   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:05.513178   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:05.516913   15224 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:47:05.516970   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:05.516970   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:05 GMT
	I1014 08:47:05.517020   15224 round_trippers.go:580]     Audit-Id: 1cdfac50-9b2f-44bf-81de-280089b69120
	I1014 08:47:05.517020   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:05.517020   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:05.517020   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:05.517020   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:05.517479   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:06.007350   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:06.008037   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:06.008037   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:06.008037   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:06.013381   15224 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:47:06.013502   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:06.013502   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:06.013502   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:06 GMT
	I1014 08:47:06.013643   15224 round_trippers.go:580]     Audit-Id: 3e765249-e89d-4ddc-8f0f-dc2eb05205e0
	I1014 08:47:06.013643   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:06.013643   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:06.013643   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:06.013933   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I1014 08:47:06.014886   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:06.014943   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:06.014943   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:06.014943   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:06.018169   15224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 08:47:06.018169   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:06.018235   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:06.018235   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:06.018235   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:06.018235   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:06 GMT
	I1014 08:47:06.018235   15224 round_trippers.go:580]     Audit-Id: 69d90dd1-8c46-4162-ac62-55df851ff11c
	I1014 08:47:06.018235   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:06.018689   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:06.019232   15224 pod_ready.go:103] pod "coredns-7c65d6cfc9-fs9ct" in "kube-system" namespace has status "Ready":"False"
	I1014 08:47:06.506428   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:06.506428   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:06.506428   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:06.506428   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:06.511682   15224 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:47:06.511766   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:06.511766   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:06.511766   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:06.511766   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:06 GMT
	I1014 08:47:06.511766   15224 round_trippers.go:580]     Audit-Id: fd06389c-4170-4969-b3ef-cc937b6dc64c
	I1014 08:47:06.511766   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:06.511766   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:06.512051   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I1014 08:47:06.513133   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:06.513226   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:06.513226   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:06.513297   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:06.518571   15224 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:47:06.518571   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:06.518571   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:06.518571   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:06.518571   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:06.518571   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:06 GMT
	I1014 08:47:06.518571   15224 round_trippers.go:580]     Audit-Id: ab8a2e71-e016-4381-94bf-4801cc8f440c
	I1014 08:47:06.518571   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:06.519092   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:07.006749   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:07.006749   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:07.006749   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:07.006749   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:07.011571   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:47:07.011667   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:07.011667   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:07.011667   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:07 GMT
	I1014 08:47:07.011667   15224 round_trippers.go:580]     Audit-Id: c218ebbe-21a8-4892-9852-1c01dfffcc96
	I1014 08:47:07.011667   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:07.011667   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:07.011667   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:07.011914   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I1014 08:47:07.012147   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:07.012745   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:07.012745   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:07.012745   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:07.015174   15224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 08:47:07.015174   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:07.015174   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:07 GMT
	I1014 08:47:07.015174   15224 round_trippers.go:580]     Audit-Id: b1e11afd-9a6b-43b2-83c4-46902b554b7e
	I1014 08:47:07.015726   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:07.015726   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:07.015726   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:07.015726   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:07.015790   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:07.507147   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:07.507147   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:07.507147   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:07.507147   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:07.512494   15224 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:47:07.512515   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:07.512515   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:07 GMT
	I1014 08:47:07.512515   15224 round_trippers.go:580]     Audit-Id: 31a843f6-ed99-44d1-b93d-3f93e09d9add
	I1014 08:47:07.512576   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:07.512576   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:07.512576   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:07.512576   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:07.512847   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I1014 08:47:07.513892   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:07.513892   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:07.513892   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:07.513892   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:07.516537   15224 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1014 08:47:07.516590   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:07.516590   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:07 GMT
	I1014 08:47:07.516590   15224 round_trippers.go:580]     Audit-Id: 0d6c10fb-d992-49c0-9edc-5d660ea93dd2
	I1014 08:47:07.516590   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:07.516590   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:07.516590   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:07.516590   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:07.516835   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:08.007721   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:08.007721   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:08.007721   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:08.007825   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:08.012247   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:47:08.012450   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:08.012450   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:08.012450   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:08.012450   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:08.012450   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:08 GMT
	I1014 08:47:08.012450   15224 round_trippers.go:580]     Audit-Id: 520ff5a4-4bfd-40d1-a319-20c62f138073
	I1014 08:47:08.012450   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:08.012730   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I1014 08:47:08.013718   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:08.013786   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:08.013786   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:08.013786   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:08.016793   15224 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:47:08.016793   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:08.016887   15224 round_trippers.go:580]     Audit-Id: 4cbed752-f465-4e96-b986-f1a19c1c9c0d
	I1014 08:47:08.016887   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:08.016887   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:08.016887   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:08.016887   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:08.016887   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:08 GMT
	I1014 08:47:08.017219   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:08.507121   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:08.507711   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:08.507711   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:08.507711   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:08.511937   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:47:08.512056   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:08.512056   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:08 GMT
	I1014 08:47:08.512056   15224 round_trippers.go:580]     Audit-Id: e8ca3373-f077-463a-b9d5-c452fab90974
	I1014 08:47:08.512184   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:08.512184   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:08.512184   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:08.512184   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:08.512334   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I1014 08:47:08.513386   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:08.513386   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:08.513386   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:08.513386   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:08.519452   15224 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1014 08:47:08.519452   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:08.519452   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:08.519452   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:08 GMT
	I1014 08:47:08.519452   15224 round_trippers.go:580]     Audit-Id: ced7667e-6c66-46a4-9853-0a33235d155d
	I1014 08:47:08.519452   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:08.519452   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:08.519452   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:08.519452   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:08.520308   15224 pod_ready.go:103] pod "coredns-7c65d6cfc9-fs9ct" in "kube-system" namespace has status "Ready":"False"
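
For context on the cycle that repeats above: this is minikube's pod readiness wait (the verdict lines come from pod_ready.go, the request/response detail from client-go's round_trippers logging at high verbosity). Roughly every 500ms it GETs the coredns pod and its node, then records that the pod "has status \"Ready\":\"False\"" until the pod's Ready condition turns True or the wait times out. The following is a minimal Go sketch of that polling pattern against the standard client-go API; it is illustrative only, not minikube's actual implementation (the function name, the 4-minute timeout, and the error handling are assumptions):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls the named pod until its Ready condition is True.
// The 500ms interval matches the ~half-second cadence visible in the log.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 4*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				// Tolerate transient API errors and keep polling
				// (a simplification for this sketch).
				return false, nil
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					// This is the check behind the repeated
					// `has status "Ready":"False"` lines above.
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	if err := waitPodReady(context.Background(), cs, "kube-system", "coredns-7c65d6cfc9-fs9ct"); err != nil {
		fmt.Println("pod never became ready:", err)
	}
}

The same condition can be inspected by hand (quoting may differ per shell), e.g.:
kubectl -n kube-system get pod coredns-7c65d6cfc9-fs9ct -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
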
	I1014 08:47:09.006595   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:09.006595   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:09.006595   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:09.006595   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:09.011936   15224 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:47:09.011988   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:09.011988   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:09.011988   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:09.011988   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:09.011988   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:09.011988   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:09 GMT
	I1014 08:47:09.011988   15224 round_trippers.go:580]     Audit-Id: c484f081-83c1-4be0-adc2-7aa92f0a3dc6
	I1014 08:47:09.011988   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I1014 08:47:09.012716   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:09.012716   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:09.012716   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:09.012716   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:09.015695   15224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 08:47:09.015695   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:09.015695   15224 round_trippers.go:580]     Audit-Id: 8bd75b39-2234-4b8b-9013-177a866df8eb
	I1014 08:47:09.015695   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:09.015817   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:09.015817   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:09.015817   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:09.015817   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:09 GMT
	I1014 08:47:09.015989   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:09.506435   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:09.506435   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:09.506435   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:09.506435   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:09.513063   15224 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1014 08:47:09.513145   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:09.513145   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:09.513145   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:09 GMT
	I1014 08:47:09.513145   15224 round_trippers.go:580]     Audit-Id: 06f28490-94ff-40dc-99bb-5cb85f73a931
	I1014 08:47:09.513280   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:09.513280   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:09.513280   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:09.513662   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I1014 08:47:09.514569   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:09.514664   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:09.514664   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:09.514664   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:09.518160   15224 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:47:09.518160   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:09.518160   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:09 GMT
	I1014 08:47:09.518160   15224 round_trippers.go:580]     Audit-Id: c6525d6e-9818-4d1a-90c6-7806fadb3ce2
	I1014 08:47:09.518160   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:09.518470   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:09.518540   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:09.518540   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:09.518827   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:10.006970   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:10.006970   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:10.006970   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:10.006970   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:10.011008   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:47:10.011088   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:10.011088   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:10.011088   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:10.011088   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:10 GMT
	I1014 08:47:10.011088   15224 round_trippers.go:580]     Audit-Id: b2e32801-8b13-4cf7-b163-91bc80134065
	I1014 08:47:10.011088   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:10.011088   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:10.011350   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I1014 08:47:10.012213   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:10.012293   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:10.012293   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:10.012293   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:10.013972   15224 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1014 08:47:10.014713   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:10.014713   15224 round_trippers.go:580]     Audit-Id: c48f83b8-1463-4288-81b2-41209700f82a
	I1014 08:47:10.014713   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:10.014713   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:10.014713   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:10.014713   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:10.014713   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:10 GMT
	I1014 08:47:10.015098   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:10.506538   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:10.506538   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:10.506538   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:10.506538   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:10.511110   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:47:10.511178   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:10.511178   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:10.511178   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:10 GMT
	I1014 08:47:10.511178   15224 round_trippers.go:580]     Audit-Id: b6155baf-4833-4d66-8844-2fad966eab08
	I1014 08:47:10.511178   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:10.511178   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:10.511178   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:10.511486   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I1014 08:47:10.512394   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:10.512394   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:10.512394   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:10.512394   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:10.515291   15224 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1014 08:47:10.515335   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:10.515335   15224 round_trippers.go:580]     Audit-Id: c8c32713-140e-4e16-b02c-5eb874c6be6c
	I1014 08:47:10.515335   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:10.515335   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:10.515382   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:10.515382   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:10.515382   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:10 GMT
	I1014 08:47:10.515723   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:11.006435   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:11.006435   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:11.006435   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:11.006435   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:11.011127   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:47:11.011127   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:11.011127   15224 round_trippers.go:580]     Audit-Id: 9c151fd3-37d0-4147-b731-61c1b378ba84
	I1014 08:47:11.011127   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:11.011127   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:11.011127   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:11.011127   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:11.011127   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:11 GMT
	I1014 08:47:11.011361   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I1014 08:47:11.012761   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:11.012761   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:11.012946   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:11.012946   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:11.015307   15224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 08:47:11.015307   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:11.015307   15224 round_trippers.go:580]     Audit-Id: db06ee2f-d639-4669-91eb-2360749abc27
	I1014 08:47:11.015307   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:11.015307   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:11.015532   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:11.015532   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:11.015532   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:11 GMT
	I1014 08:47:11.015860   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:11.016241   15224 pod_ready.go:103] pod "coredns-7c65d6cfc9-fs9ct" in "kube-system" namespace has status "Ready":"False"
	I1014 08:47:11.507392   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:11.507392   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:11.507392   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:11.507392   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:11.512533   15224 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:47:11.512620   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:11.512620   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:11.512620   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:11 GMT
	I1014 08:47:11.512620   15224 round_trippers.go:580]     Audit-Id: bac9f82c-315d-440c-8078-0b0e4a0ee41c
	I1014 08:47:11.512620   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:11.512620   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:11.512620   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:11.513007   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I1014 08:47:11.513722   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:11.513722   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:11.513722   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:11.513722   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:11.518444   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:47:11.518444   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:11.518444   15224 round_trippers.go:580]     Audit-Id: c838a67a-b881-43ce-ad4f-6a87e4c89a4c
	I1014 08:47:11.518444   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:11.518444   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:11.518444   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:11.518444   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:11.518444   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:11 GMT
	I1014 08:47:11.518444   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:12.006582   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:12.006582   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:12.007326   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:12.007326   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:12.012393   15224 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:47:12.012393   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:12.012393   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:12.012393   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:12.012393   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:12 GMT
	I1014 08:47:12.012393   15224 round_trippers.go:580]     Audit-Id: 34e82f0d-ec81-4dee-bea3-e36516aa5f0d
	I1014 08:47:12.012393   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:12.012393   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:12.012721   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I1014 08:47:12.013553   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:12.013553   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:12.013651   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:12.013651   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:12.017785   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:47:12.017785   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:12.017785   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:12.017785   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:12.017785   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:12 GMT
	I1014 08:47:12.017785   15224 round_trippers.go:580]     Audit-Id: b2eb7e49-fe42-4555-b1bf-879ffb0ae3ba
	I1014 08:47:12.017785   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:12.017785   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:12.018477   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:12.507323   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:12.507423   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:12.507423   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:12.507423   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:12.512108   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:47:12.512207   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:12.512207   15224 round_trippers.go:580]     Audit-Id: 546f5a31-2c3c-47ff-8a44-f2843dce4a5e
	I1014 08:47:12.512207   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:12.512207   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:12.512207   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:12.512347   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:12.512424   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:12 GMT
	I1014 08:47:12.512488   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I1014 08:47:12.513507   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:12.513568   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:12.513568   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:12.513568   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:12.517013   15224 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:47:12.517142   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:12.517142   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:12 GMT
	I1014 08:47:12.517142   15224 round_trippers.go:580]     Audit-Id: 26e64686-e758-4892-81fd-55e324997e47
	I1014 08:47:12.517142   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:12.517142   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:12.517142   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:12.517142   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:12.517437   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:13.006520   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:13.006520   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:13.006520   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:13.006520   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:13.010891   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:47:13.010891   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:13.010891   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:13.010891   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:13.010891   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:13 GMT
	I1014 08:47:13.010891   15224 round_trippers.go:580]     Audit-Id: 6a9084a1-6c4c-4c8d-99d6-65142ce35539
	I1014 08:47:13.010891   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:13.010891   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:13.011275   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I1014 08:47:13.012142   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:13.012246   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:13.012246   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:13.012246   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:13.014315   15224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 08:47:13.015307   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:13.015307   15224 round_trippers.go:580]     Audit-Id: f4309c0f-831d-4770-9f1e-e131d7b0f9b4
	I1014 08:47:13.015307   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:13.015307   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:13.015307   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:13.015307   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:13.015307   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:13 GMT
	I1014 08:47:13.015481   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:13.016424   15224 pod_ready.go:103] pod "coredns-7c65d6cfc9-fs9ct" in "kube-system" namespace has status "Ready":"False"
	I1014 08:47:13.506417   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:13.507223   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:13.507223   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:13.507223   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:13.511693   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:47:13.511817   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:13.511817   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:13.511817   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:13.511817   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:13 GMT
	I1014 08:47:13.511817   15224 round_trippers.go:580]     Audit-Id: 7fad4c8b-b30e-498b-9ec4-5606a8ade29c
	I1014 08:47:13.511817   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:13.511899   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:13.512089   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I1014 08:47:13.513074   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:13.513146   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:13.513146   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:13.513146   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:13.516684   15224 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:47:13.516684   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:13.516684   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:13 GMT
	I1014 08:47:13.516684   15224 round_trippers.go:580]     Audit-Id: 8f255108-f0af-4a99-9143-fb21a67f899e
	I1014 08:47:13.516684   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:13.516684   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:13.516684   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:13.516684   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:13.516999   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:14.007413   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:14.007413   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:14.007413   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:14.007413   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:14.012790   15224 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:47:14.012790   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:14.012790   15224 round_trippers.go:580]     Audit-Id: 78f8ee80-1015-40f0-97f5-0981b36b4386
	I1014 08:47:14.012790   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:14.012790   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:14.012790   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:14.012790   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:14.012790   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:14 GMT
	I1014 08:47:14.012790   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I1014 08:47:14.014159   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:14.014159   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:14.014159   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:14.014159   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:14.017212   15224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 08:47:14.017212   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:14.017212   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:14.017212   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:14 GMT
	I1014 08:47:14.017212   15224 round_trippers.go:580]     Audit-Id: c14f110c-e4e6-46fd-8da0-9ac9a8be1e50
	I1014 08:47:14.017212   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:14.017212   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:14.017212   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:14.017212   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:14.507162   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:14.507162   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:14.507162   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:14.507162   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:14.513064   15224 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:47:14.513064   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:14.513064   15224 round_trippers.go:580]     Audit-Id: 62f7ba62-de7d-4c73-84ad-30979148efb0
	I1014 08:47:14.513064   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:14.513170   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:14.513170   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:14.513170   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:14.513170   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:14 GMT
	I1014 08:47:14.513471   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I1014 08:47:14.514151   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:14.514151   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:14.514337   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:14.514337   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:14.519788   15224 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:47:14.519788   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:14.519788   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:14 GMT
	I1014 08:47:14.519788   15224 round_trippers.go:580]     Audit-Id: a51cfb8e-c4d2-4f0d-b13a-26340550ffa1
	I1014 08:47:14.519892   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:14.519892   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:14.519919   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:14.519950   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:14.519950   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:15.007661   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:15.007661   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:15.007661   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:15.007661   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:15.012074   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:47:15.012074   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:15.012074   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:15.012074   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:15 GMT
	I1014 08:47:15.012074   15224 round_trippers.go:580]     Audit-Id: baed5bb1-35d7-47bb-86e2-0aa2501c8146
	I1014 08:47:15.012074   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:15.012074   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:15.012074   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:15.012380   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I1014 08:47:15.013197   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:15.013197   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:15.013362   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:15.013362   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:15.018628   15224 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:47:15.018628   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:15.018628   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:15.018628   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:15.018628   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:15.018628   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:15 GMT
	I1014 08:47:15.018628   15224 round_trippers.go:580]     Audit-Id: b8ca27a2-03fc-477b-a15a-95e8a6c0c70d
	I1014 08:47:15.018628   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:15.018628   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:15.019476   15224 pod_ready.go:103] pod "coredns-7c65d6cfc9-fs9ct" in "kube-system" namespace has status "Ready":"False"
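
[Editor's illustration] The repeating GET pairs above are one readiness-wait cycle that minikube runs roughly every 500 ms: fetch the CoreDNS pod, fetch the multinode-671000 node, and report via pod_ready.go that the pod's Ready condition is still False. The request/response dumps appear to come from client-go's round-tripper debug logging at high klog verbosity. The following is a minimal client-go sketch of the same polling pattern, not minikube's actual pod_ready.go implementation; the pod name, namespace, and 500 ms interval are taken from this log, while the 6-minute timeout and helper names are assumptions.

// Illustrative sketch only: poll a pod until its Ready condition is True.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Load ~/.kube/config, as a typical out-of-cluster client would.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Assumed timeout; minikube's actual wait budget may differ.
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	for {
		// Each iteration mirrors one GET pair in the trace above.
		// (minikube's loop also re-fetches the node; omitted here for brevity.)
		pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "coredns-7c65d6cfc9-fs9ct", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		select {
		case <-ctx.Done():
			panic("timed out waiting for pod to become Ready")
		case <-time.After(500 * time.Millisecond): // cadence observed in the log
		}
	}
}
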
	I1014 08:47:15.506477   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:15.506477   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:15.506477   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:15.506477   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:15.511166   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:47:15.511166   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:15.511166   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:15.511307   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:15.511307   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:15.511307   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:15 GMT
	I1014 08:47:15.511307   15224 round_trippers.go:580]     Audit-Id: e916cecb-826d-4354-af10-9cabb28bd69a
	I1014 08:47:15.511307   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:15.511451   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I1014 08:47:15.511892   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:15.511892   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:15.511892   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:15.511892   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:15.520011   15224 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1014 08:47:15.520046   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:15.520105   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:15.520105   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:15 GMT
	I1014 08:47:15.520105   15224 round_trippers.go:580]     Audit-Id: 33f64d34-16aa-4a43-967b-b926d5f98321
	I1014 08:47:15.520105   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:15.520105   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:15.520105   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:15.520105   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:16.006631   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:16.006631   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:16.006631   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:16.006631   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:16.011620   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:47:16.011620   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:16.011620   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:16 GMT
	I1014 08:47:16.011620   15224 round_trippers.go:580]     Audit-Id: 31e70c7d-5221-43ea-8127-e8536f18b112
	I1014 08:47:16.011742   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:16.011742   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:16.011742   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:16.011742   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:16.011957   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I1014 08:47:16.013142   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:16.013248   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:16.013248   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:16.013248   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:16.015444   15224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 08:47:16.015444   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:16.016004   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:16.016004   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:16.016004   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:16 GMT
	I1014 08:47:16.016004   15224 round_trippers.go:580]     Audit-Id: 0d2c6a23-82eb-4de2-a65b-9c5ccd41bf3c
	I1014 08:47:16.016004   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:16.016004   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:16.016488   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:16.508222   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:16.508222   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:16.508222   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:16.508222   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:16.513132   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:47:16.513276   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:16.513276   15224 round_trippers.go:580]     Audit-Id: 92cefe22-a976-4d40-8ef5-1fd0de7c9281
	I1014 08:47:16.513379   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:16.513379   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:16.513379   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:16.513379   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:16.513422   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:16 GMT
	I1014 08:47:16.513422   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I1014 08:47:16.514218   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:16.514218   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:16.514218   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:16.514218   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:16.520652   15224 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1014 08:47:16.520652   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:16.520652   15224 round_trippers.go:580]     Audit-Id: 9f45b35e-3d7b-48c1-8226-c0835e02ceb7
	I1014 08:47:16.520652   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:16.520652   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:16.520652   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:16.520652   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:16.520652   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:16 GMT
	I1014 08:47:16.520811   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:17.006825   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:17.006825   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:17.006825   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:17.006825   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:17.010058   15224 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:47:17.011080   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:17.011133   15224 round_trippers.go:580]     Audit-Id: 18db3927-457f-4476-a646-5e92011f1be4
	I1014 08:47:17.011133   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:17.011133   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:17.011133   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:17.011133   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:17.011133   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:17 GMT
	I1014 08:47:17.011413   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I1014 08:47:17.012320   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:17.012320   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:17.012320   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:17.012320   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:17.014796   15224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 08:47:17.014796   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:17.014796   15224 round_trippers.go:580]     Audit-Id: 696cd3c8-b82b-4fbd-bc5f-e9c53892286d
	I1014 08:47:17.015321   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:17.015321   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:17.015321   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:17.015321   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:17.015321   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:17 GMT
	I1014 08:47:17.015614   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:17.507109   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:17.507232   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:17.507232   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:17.507232   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:17.511533   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:47:17.511533   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:17.511533   15224 round_trippers.go:580]     Audit-Id: f7633e99-25ed-4fd3-8d31-8bd181530254
	I1014 08:47:17.511533   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:17.511657   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:17.511657   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:17.511657   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:17.511657   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:17 GMT
	I1014 08:47:17.511965   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I1014 08:47:17.512698   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:17.512825   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:17.512825   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:17.512825   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:17.515118   15224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 08:47:17.515118   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:17.515118   15224 round_trippers.go:580]     Audit-Id: 7cd020d0-aead-43af-8a37-482b65d69e01
	I1014 08:47:17.516068   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:17.516068   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:17.516068   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:17.516068   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:17.516068   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:17 GMT
	I1014 08:47:17.516560   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:17.517068   15224 pod_ready.go:103] pod "coredns-7c65d6cfc9-fs9ct" in "kube-system" namespace has status "Ready":"False"
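The pod_ready.go:103 line above closes one iteration of minikube's readiness wait: roughly every 500 ms it re-fetches the coredns pod and the multinode-671000 node, and keeps looping while the pod's Ready condition is False. A minimal client-go sketch of that polling pattern follows; it is an approximation of what the log shows, not minikube's actual pod_ready.go, and the function name waitPodReady and the fixed 500 ms interval are assumptions.

    // Hedged sketch: poll a pod until its Ready condition is True, mirroring
    // the GET loop visible in the log above. Not minikube's real pod_ready.go.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady re-fetches the pod every interval until Ready=True or ctx ends.
    func waitPodReady(ctx context.Context, client kubernetes.Interface, ns, name string, interval time.Duration) error {
    	ticker := time.NewTicker(interval)
    	defer ticker.Stop()
    	for {
    		pod, err := client.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    		if err != nil {
    			return err
    		}
    		for _, c := range pod.Status.Conditions {
    			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    				return nil // the state pod_ready.go is waiting for above
    			}
    		}
    		select {
    		case <-ctx.Done():
    			return ctx.Err()
    		case <-ticker.C: // ~500 ms cadence, matching the log timestamps
    		}
    	}
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
    	defer cancel()
    	fmt.Println(waitPodReady(ctx, client, "kube-system", "coredns-7c65d6cfc9-fs9ct", 500*time.Millisecond))
    }

The paired GET of /api/v1/nodes/multinode-671000 on every tick is omitted from the sketch; presumably it lets the waiter also react if the node's state changes while the pod is pending.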
	I1014 08:47:18.007225   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:18.007885   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:18.007885   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:18.007885   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:18.011998   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:47:18.011998   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:18.011998   15224 round_trippers.go:580]     Audit-Id: 9a8c58e9-7460-4b9e-9090-2e9a6e238080
	I1014 08:47:18.011998   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:18.012143   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:18.012143   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:18.012143   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:18.012143   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:18 GMT
	I1014 08:47:18.012383   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I1014 08:47:18.013224   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:18.013224   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:18.013305   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:18.013305   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:18.019345   15224 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1014 08:47:18.019345   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:18.019345   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:18.019345   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:18.019345   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:18.019345   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:18.019345   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:18 GMT
	I1014 08:47:18.019345   15224 round_trippers.go:580]     Audit-Id: 7c2d2856-b25d-4da1-91f1-53542397cdf3
	I1014 08:47:18.019345   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:18.507458   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:18.507554   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:18.507554   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:18.507554   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:18.514851   15224 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1014 08:47:18.514851   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:18.514851   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:18.514851   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:18.514851   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:18.514851   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:18.514851   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:18 GMT
	I1014 08:47:18.514851   15224 round_trippers.go:580]     Audit-Id: cdaf01ce-867d-4c99-bc82-f89879c71827
	I1014 08:47:18.514851   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I1014 08:47:18.514851   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:18.514851   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:18.514851   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:18.514851   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:18.518408   15224 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:47:18.518408   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:18.518408   15224 round_trippers.go:580]     Audit-Id: d74c1796-3bbd-4b83-974c-c1a06e450acf
	I1014 08:47:18.518408   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:18.518408   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:18.518408   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:18.518408   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:18.518408   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:18 GMT
	I1014 08:47:18.518408   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:19.007253   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:19.007253   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:19.007253   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:19.007253   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:19.011995   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:47:19.012103   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:19.012182   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:19.012182   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:19.012182   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:19.012182   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:19 GMT
	I1014 08:47:19.012182   15224 round_trippers.go:580]     Audit-Id: 6255e6ac-a168-4219-a867-de075f16566a
	I1014 08:47:19.012182   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:19.012417   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I1014 08:47:19.013173   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:19.013173   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:19.013173   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:19.013253   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:19.020207   15224 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1014 08:47:19.020207   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:19.020207   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:19 GMT
	I1014 08:47:19.020207   15224 round_trippers.go:580]     Audit-Id: 91df407e-0025-4337-a011-0fa51e27fecd
	I1014 08:47:19.020303   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:19.020303   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:19.020303   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:19.020303   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:19.020625   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:19.507671   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:19.507755   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:19.507755   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:19.507755   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:19.512040   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:47:19.512137   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:19.512137   15224 round_trippers.go:580]     Audit-Id: fb270a46-2a7d-4c0d-94be-b49625fa56f8
	I1014 08:47:19.512137   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:19.512137   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:19.512137   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:19.512304   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:19.512304   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:19 GMT
	I1014 08:47:19.512447   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I1014 08:47:19.513603   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:19.513920   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:19.514041   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:19.514041   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:19.518839   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:47:19.518886   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:19.518886   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:19.518886   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:19.518886   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:19.518886   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:19.518886   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:19 GMT
	I1014 08:47:19.518886   15224 round_trippers.go:580]     Audit-Id: 5b4168cc-f28b-4b94-93da-8a92474c4810
	I1014 08:47:19.518886   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:19.519590   15224 pod_ready.go:103] pod "coredns-7c65d6cfc9-fs9ct" in "kube-system" namespace has status "Ready":"False"
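All of the round_trippers.go:463/469/473/574/577/580 and request.go:1351 lines in this log are client-go's built-in HTTP tracing, enabled at high klog verbosity, which prints each request's method, URL, and headers, then the response status, headers, and a truncated body. A stdlib-only sketch of the same technique, a wrapping http.RoundTripper, is below; debugTransport and the 1024-byte truncation limit are illustrative assumptions, not client-go's implementation.

    // Hedged sketch of debug tracing in the style of client-go's
    // round_trippers.go: wrap an http.RoundTripper and log both directions.
    package main

    import (
    	"bytes"
    	"fmt"
    	"io"
    	"net/http"
    )

    type debugTransport struct{ next http.RoundTripper }

    func (t debugTransport) RoundTrip(req *http.Request) (*http.Response, error) {
    	fmt.Printf("%s %s\n", req.Method, req.URL)
    	fmt.Println("Request Headers:")
    	for k, v := range req.Header {
    		fmt.Printf("    %s: %v\n", k, v)
    	}
    	resp, err := t.next.RoundTrip(req)
    	if err != nil {
    		return nil, err
    	}
    	fmt.Printf("Response Status: %s\n", resp.Status)
    	fmt.Println("Response Headers:")
    	for k, v := range resp.Header {
    		fmt.Printf("    %s: %v\n", k, v)
    	}
    	// Read the body, log a truncated copy, then hand the bytes back to the caller.
    	body, err := io.ReadAll(resp.Body)
    	if err != nil {
    		return nil, err
    	}
    	resp.Body.Close()
    	const limit = 1024 // assumed cap; request.go applies its own truncation
    	if len(body) > limit {
    		fmt.Printf("Response Body: %s [truncated %d chars]\n", body[:limit], len(body)-limit)
    	} else {
    		fmt.Printf("Response Body: %s\n", body)
    	}
    	resp.Body = io.NopCloser(bytes.NewReader(body))
    	return resp, nil
    }

    func main() {
    	client := &http.Client{Transport: debugTransport{next: http.DefaultTransport}}
    	if _, err := client.Get("https://example.com/"); err != nil {
    		fmt.Println("request failed:", err)
    	}
    }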
	I1014 08:47:20.006817   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:20.006817   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:20.006817   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:20.006817   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:20.011443   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:47:20.011543   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:20.011543   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:20.011543   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:20.011543   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:20.011635   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:20.011635   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:20 GMT
	I1014 08:47:20.011635   15224 round_trippers.go:580]     Audit-Id: 44719273-604e-405a-b11e-01c0640de86a
	I1014 08:47:20.011635   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I1014 08:47:20.012486   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:20.012486   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:20.012574   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:20.012574   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:20.015859   15224 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:47:20.016039   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:20.016039   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:20 GMT
	I1014 08:47:20.016039   15224 round_trippers.go:580]     Audit-Id: 4d6b5e39-ffe1-446f-9477-0545b059dffb
	I1014 08:47:20.016039   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:20.016039   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:20.016103   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:20.016103   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:20.016497   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:20.506678   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:20.506678   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:20.506678   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:20.506678   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:20.510833   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:47:20.511227   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:20.511304   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:20.511304   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:20.511304   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:20 GMT
	I1014 08:47:20.511336   15224 round_trippers.go:580]     Audit-Id: 68f65e29-6ed5-4c5b-8c44-0f46bd80731c
	I1014 08:47:20.511336   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:20.511336   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:20.511336   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I1014 08:47:20.512129   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:20.512129   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:20.512129   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:20.512129   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:20.515890   15224 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:47:20.515890   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:20.515890   15224 round_trippers.go:580]     Audit-Id: e481aaa4-6960-411e-89e9-1e9183156bb6
	I1014 08:47:20.515890   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:20.515890   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:20.515890   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:20.515890   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:20.515890   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:20 GMT
	I1014 08:47:20.516719   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:21.007016   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:21.007016   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:21.007016   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:21.007016   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:21.010237   15224 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:47:21.010237   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:21.011027   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:21.011027   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:21 GMT
	I1014 08:47:21.011027   15224 round_trippers.go:580]     Audit-Id: 47b87774-7090-483b-9926-39ccad3716e5
	I1014 08:47:21.011027   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:21.011027   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:21.011027   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:21.011186   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1996","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7275 chars]
	I1014 08:47:21.012374   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:21.012422   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:21.012458   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:21.012458   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:21.032758   15224 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I1014 08:47:21.032758   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:21.032758   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:21.032758   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:21.032758   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:21.032758   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:21.032758   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:21 GMT
	I1014 08:47:21.032758   15224 round_trippers.go:580]     Audit-Id: 1571e1d2-fe3c-4d0f-9e64-214a36f36698
	I1014 08:47:21.032758   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:21.507257   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:21.507257   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:21.507257   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:21.507257   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:21.511829   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:47:21.511904   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:21.511904   15224 round_trippers.go:580]     Audit-Id: 907c98fb-199a-4b1f-befc-1db794bb880e
	I1014 08:47:21.511904   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:21.511904   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:21.511904   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:21.511904   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:21.511904   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:21 GMT
	I1014 08:47:21.512515   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1996","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7275 chars]
	I1014 08:47:21.513885   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:21.513885   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:21.513885   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:21.513885   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:21.517247   15224 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:47:21.517299   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:21.517299   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:21 GMT
	I1014 08:47:21.517299   15224 round_trippers.go:580]     Audit-Id: ef88c98f-83a7-4328-a949-cc9eb37c3d0e
	I1014 08:47:21.517299   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:21.517299   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:21.517299   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:21.517388   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:21.517692   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:22.007412   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:22.007412   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:22.007412   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:22.007412   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:22.012763   15224 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:47:22.012900   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:22.012900   15224 round_trippers.go:580]     Audit-Id: 45455627-5f1f-4676-8d3e-5703470425b1
	I1014 08:47:22.012900   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:22.012900   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:22.012900   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:22.012900   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:22.012900   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:22 GMT
	I1014 08:47:22.013364   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1996","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7275 chars]
	I1014 08:47:22.014066   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:22.014066   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:22.014066   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:22.014066   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:22.017535   15224 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:47:22.017535   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:22.017535   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:22.017649   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:22.017649   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:22.017672   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:22 GMT
	I1014 08:47:22.017672   15224 round_trippers.go:580]     Audit-Id: 48e1e5e9-abf3-497d-a3da-5e0cec144c2c
	I1014 08:47:22.017672   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:22.017807   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:22.018437   15224 pod_ready.go:103] pod "coredns-7c65d6cfc9-fs9ct" in "kube-system" namespace has status "Ready":"False"
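The cycle above repeats roughly every 500ms: GET the pod, GET its hosting node, then re-check the pod's Ready condition until it reports "True". A minimal client-go sketch of the same polling loop follows; the pod name and namespace are taken from the log, but the helper itself is illustrative, not minikube's actual pod_ready implementation.

    // poll_pod_ready.go - illustrative sketch of the pod_ready polling loop above;
    // not minikube's actual code.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        // Kubeconfig path is an assumption; minikube builds its client config differently.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        // 6m0s matches the "waiting up to 6m0s" budget in the log.
        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
        defer cancel()
        for {
            // Name and namespace as in the log above.
            pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "coredns-7c65d6cfc9-fs9ct", metav1.GetOptions{})
            if err == nil && isPodReady(pod) {
                fmt.Println(`pod has status "Ready":"True"`)
                return
            }
            time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
        }
    }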
	I1014 08:47:22.507392   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:22.507529   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:22.507529   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:22.507529   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:22.516406   15224 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1014 08:47:22.516406   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:22.516406   15224 round_trippers.go:580]     Audit-Id: a7b5ccb8-bb0d-4772-8475-adc01a709731
	I1014 08:47:22.516406   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:22.516406   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:22.516406   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:22.516406   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:22.516406   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:22 GMT
	I1014 08:47:22.516406   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"2001","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7046 chars]
	I1014 08:47:22.517152   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:22.517152   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:22.517152   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:22.517152   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:22.521970   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:47:22.521970   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:22.521970   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:22.521970   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:22.521970   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:22.521970   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:22 GMT
	I1014 08:47:22.522111   15224 round_trippers.go:580]     Audit-Id: 5ecc5f15-8d19-4bc2-9de5-40d471223401
	I1014 08:47:22.522111   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:22.522336   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:22.522494   15224 pod_ready.go:93] pod "coredns-7c65d6cfc9-fs9ct" in "kube-system" namespace has status "Ready":"True"
	I1014 08:47:22.522494   15224 pod_ready.go:82] duration metric: took 22.5161315s for pod "coredns-7c65d6cfc9-fs9ct" in "kube-system" namespace to be "Ready" ...
	I1014 08:47:22.522494   15224 pod_ready.go:79] waiting up to 6m0s for pod "etcd-multinode-671000" in "kube-system" namespace to be "Ready" ...
	I1014 08:47:22.522494   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-671000
	I1014 08:47:22.522494   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:22.522494   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:22.522494   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:22.526482   15224 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:47:22.526482   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:22.526482   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:22.526482   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:22 GMT
	I1014 08:47:22.526482   15224 round_trippers.go:580]     Audit-Id: 6d042254-f6fa-4858-8857-13aec94cb0f3
	I1014 08:47:22.526482   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:22.526482   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:22.526482   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:22.526482   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-671000","namespace":"kube-system","uid":"098aece2-cb2c-470a-878a-872417e4387f","resourceVersion":"1933","creationTimestamp":"2024-10-14T15:46:17Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.20.106.123:2379","kubernetes.io/config.hash":"679486a8f27de5805bc2e87fb1920dce","kubernetes.io/config.mirror":"679486a8f27de5805bc2e87fb1920dce","kubernetes.io/config.seen":"2024-10-14T15:46:09.843414705Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:46:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6617 chars]
	I1014 08:47:22.527519   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:22.527695   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:22.527695   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:22.527767   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:22.529858   15224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 08:47:22.530635   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:22.530635   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:22.530635   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:22 GMT
	I1014 08:47:22.530635   15224 round_trippers.go:580]     Audit-Id: fc7b79b1-3b10-4b82-ab77-14c76b0685e4
	I1014 08:47:22.530635   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:22.530635   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:22.530635   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:22.530952   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:22.531075   15224 pod_ready.go:93] pod "etcd-multinode-671000" in "kube-system" namespace has status "Ready":"True"
	I1014 08:47:22.531075   15224 pod_ready.go:82] duration metric: took 8.5807ms for pod "etcd-multinode-671000" in "kube-system" namespace to be "Ready" ...
	I1014 08:47:22.531075   15224 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-multinode-671000" in "kube-system" namespace to be "Ready" ...
	I1014 08:47:22.531075   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-671000
	I1014 08:47:22.531075   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:22.531075   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:22.531075   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:22.537967   15224 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1014 08:47:22.537967   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:22.537967   15224 round_trippers.go:580]     Audit-Id: 077fd83a-3418-4449-8a48-818e72fe3586
	I1014 08:47:22.537967   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:22.537967   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:22.537967   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:22.537967   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:22.537967   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:22 GMT
	I1014 08:47:22.538728   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-671000","namespace":"kube-system","uid":"64595feb-e6e8-4e69-a4b7-6459d15e3beb","resourceVersion":"1925","creationTimestamp":"2024-10-14T15:46:15Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.20.106.123:8443","kubernetes.io/config.hash":"864b69f35cb25e9dd5d87a753a055a10","kubernetes.io/config.mirror":"864b69f35cb25e9dd5d87a753a055a10","kubernetes.io/config.seen":"2024-10-14T15:46:09.765946769Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:46:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 8049 chars]
	I1014 08:47:22.539331   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:22.539331   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:22.539331   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:22.539448   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:22.542179   15224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 08:47:22.542179   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:22.542179   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:22.542179   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:22 GMT
	I1014 08:47:22.542179   15224 round_trippers.go:580]     Audit-Id: c31189b1-3c1e-414b-9f82-d770e359bde5
	I1014 08:47:22.542179   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:22.542179   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:22.542179   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:22.542179   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:22.543224   15224 pod_ready.go:93] pod "kube-apiserver-multinode-671000" in "kube-system" namespace has status "Ready":"True"
	I1014 08:47:22.543349   15224 pod_ready.go:82] duration metric: took 12.236ms for pod "kube-apiserver-multinode-671000" in "kube-system" namespace to be "Ready" ...
	I1014 08:47:22.543349   15224 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-multinode-671000" in "kube-system" namespace to be "Ready" ...
	I1014 08:47:22.543439   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-671000
	I1014 08:47:22.543493   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:22.543529   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:22.543529   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:22.545619   15224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 08:47:22.545619   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:22.545619   15224 round_trippers.go:580]     Audit-Id: 4cfbb807-6017-4c01-87de-fdc47bd6c8d1
	I1014 08:47:22.545619   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:22.545619   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:22.545619   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:22.545619   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:22.545619   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:22 GMT
	I1014 08:47:22.545619   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-671000","namespace":"kube-system","uid":"a5c7bb80-c844-476f-ba47-1cd4e599b92d","resourceVersion":"1940","creationTimestamp":"2024-10-14T15:22:39Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b83e31864e0ae98d29d960866012ecb0","kubernetes.io/config.mirror":"b83e31864e0ae98d29d960866012ecb0","kubernetes.io/config.seen":"2024-10-14T15:22:39.775213119Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7475 chars]
	I1014 08:47:22.546767   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:22.546767   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:22.546767   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:22.546767   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:22.549114   15224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 08:47:22.549114   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:22.549114   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:22 GMT
	I1014 08:47:22.549114   15224 round_trippers.go:580]     Audit-Id: 8bfc765a-f250-4c33-9183-130700d1b585
	I1014 08:47:22.549114   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:22.549114   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:22.549114   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:22.549114   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:22.549114   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:22.549905   15224 pod_ready.go:93] pod "kube-controller-manager-multinode-671000" in "kube-system" namespace has status "Ready":"True"
	I1014 08:47:22.549941   15224 pod_ready.go:82] duration metric: took 6.5917ms for pod "kube-controller-manager-multinode-671000" in "kube-system" namespace to be "Ready" ...
	I1014 08:47:22.549941   15224 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-kbpjf" in "kube-system" namespace to be "Ready" ...
	I1014 08:47:22.550056   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kbpjf
	I1014 08:47:22.550124   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:22.550124   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:22.550214   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:22.553070   15224 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1014 08:47:22.553070   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:22.553070   15224 round_trippers.go:580]     Audit-Id: c22a969d-5aec-4108-8f1c-d075493f0a49
	I1014 08:47:22.553070   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:22.553070   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:22.553070   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:22.553070   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:22.553070   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:22 GMT
	I1014 08:47:22.553070   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-kbpjf","generateName":"kube-proxy-","namespace":"kube-system","uid":"004b7f38-fa3b-4c2c-9524-8d5b1ba514e9","resourceVersion":"1803","creationTimestamp":"2024-10-14T15:25:49Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e5217416-1ed2-4ea5-ae9b-ee7bcdba0d2a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:25:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e5217416-1ed2-4ea5-ae9b-ee7bcdba0d2a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6433 chars]
	I1014 08:47:22.554039   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000-m02
	I1014 08:47:22.554188   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:22.554188   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:22.554188   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:22.556365   15224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 08:47:22.556884   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:22.556884   15224 round_trippers.go:580]     Audit-Id: e8490da7-4e4d-46a3-9830-9c188b304e0b
	I1014 08:47:22.556884   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:22.556884   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:22.556884   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:22.556884   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:22.556884   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:22 GMT
	I1014 08:47:22.557229   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000-m02","uid":"6883c70a-cd16-4441-905e-912bed6d2b2c","resourceVersion":"1990","creationTimestamp":"2024-10-14T15:25:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_10_14T08_25_50_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:25:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4584 chars]
	I1014 08:47:22.557495   15224 pod_ready.go:98] node "multinode-671000-m02" hosting pod "kube-proxy-kbpjf" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-671000-m02" has status "Ready":"Unknown"
	I1014 08:47:22.557495   15224 pod_ready.go:82] duration metric: took 7.5537ms for pod "kube-proxy-kbpjf" in "kube-system" namespace to be "Ready" ...
	E1014 08:47:22.557495   15224 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-671000-m02" hosting pod "kube-proxy-kbpjf" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-671000-m02" has status "Ready":"Unknown"
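The skip above is driven by the hosting node's Ready condition rather than by the pod itself: multinode-671000-m02 reports Ready as "Unknown" (its kubelet has stopped posting status), so the wait for kube-proxy-kbpjf is abandoned. A sketch of reading that node condition with client-go; the node name comes from the log, the program itself is illustrative.

    // node_ready.go - illustrative check of a node's Ready condition;
    // not minikube's actual code.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        node, err := client.CoreV1().Nodes().Get(context.Background(), "multinode-671000-m02", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                // Prints True, False, or Unknown - the log above observed Unknown.
                fmt.Printf("node %s Ready=%s\n", node.Name, c.Status)
            }
        }
    }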
	I1014 08:47:22.557495   15224 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-n6txs" in "kube-system" namespace to be "Ready" ...
	I1014 08:47:22.707761   15224 request.go:632] Waited for 150.266ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/kube-proxy-n6txs
	I1014 08:47:22.707761   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/kube-proxy-n6txs
	I1014 08:47:22.707761   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:22.708100   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:22.708100   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:22.712025   15224 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:47:22.712025   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:22.712142   15224 round_trippers.go:580]     Audit-Id: e0a449eb-bf2a-481c-8ccf-efed27df1b24
	I1014 08:47:22.712142   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:22.712142   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:22.712142   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:22.712142   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:22.712142   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:22 GMT
	I1014 08:47:22.712487   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-n6txs","generateName":"kube-proxy-","namespace":"kube-system","uid":"796a44f9-2067-438d-9359-34d5f968c861","resourceVersion":"1784","creationTimestamp":"2024-10-14T15:30:35Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e5217416-1ed2-4ea5-ae9b-ee7bcdba0d2a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:30:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e5217416-1ed2-4ea5-ae9b-ee7bcdba0d2a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6428 chars]
	I1014 08:47:22.907393   15224 request.go:632] Waited for 194.0438ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.106.123:8443/api/v1/nodes/multinode-671000-m03
	I1014 08:47:22.907393   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000-m03
	I1014 08:47:22.907926   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:22.907926   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:22.907926   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:22.911879   15224 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:47:22.911879   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:22.911879   15224 round_trippers.go:580]     Audit-Id: 440f6872-d332-4c7f-a3b4-eed3ef19f870
	I1014 08:47:22.911879   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:22.911879   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:22.911879   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:22.911879   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:22.911879   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:22 GMT
	I1014 08:47:22.912251   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000-m03","uid":"a7ea02fb-ac24-4430-adbc-9815c644cfa0","resourceVersion":"1897","creationTimestamp":"2024-10-14T15:41:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_10_14T08_41_35_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:41:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4303 chars]
	I1014 08:47:22.912794   15224 pod_ready.go:98] node "multinode-671000-m03" hosting pod "kube-proxy-n6txs" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-671000-m03" has status "Ready":"Unknown"
	I1014 08:47:22.912818   15224 pod_ready.go:82] duration metric: took 355.3229ms for pod "kube-proxy-n6txs" in "kube-system" namespace to be "Ready" ...
	E1014 08:47:22.912818   15224 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-671000-m03" hosting pod "kube-proxy-n6txs" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-671000-m03" has status "Ready":"Unknown"
	I1014 08:47:22.912934   15224 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-r74dx" in "kube-system" namespace to be "Ready" ...
	I1014 08:47:23.107157   15224 request.go:632] Waited for 194.1465ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r74dx
	I1014 08:47:23.107157   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r74dx
	I1014 08:47:23.107157   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:23.107157   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:23.107157   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:23.112683   15224 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:47:23.112683   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:23.112775   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:23.112775   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:23.112775   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:23 GMT
	I1014 08:47:23.112775   15224 round_trippers.go:580]     Audit-Id: a1b45df5-6598-4da5-9b3a-6a888f71aa39
	I1014 08:47:23.112775   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:23.112775   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:23.113207   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-r74dx","generateName":"kube-proxy-","namespace":"kube-system","uid":"f8d14473-8859-4015-84e9-d00656cc00c9","resourceVersion":"1856","creationTimestamp":"2024-10-14T15:22:44Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e5217416-1ed2-4ea5-ae9b-ee7bcdba0d2a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e5217416-1ed2-4ea5-ae9b-ee7bcdba0d2a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6405 chars]
	I1014 08:47:23.307122   15224 request.go:632] Waited for 193.0613ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:23.307122   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:23.307122   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:23.307122   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:23.307122   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:23.311228   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:47:23.312017   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:23.312017   15224 round_trippers.go:580]     Audit-Id: 48494c0d-e599-4956-8dd2-f606bb5be182
	I1014 08:47:23.312119   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:23.312200   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:23.312200   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:23.312230   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:23.312230   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:23 GMT
	I1014 08:47:23.312503   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:23.313162   15224 pod_ready.go:93] pod "kube-proxy-r74dx" in "kube-system" namespace has status "Ready":"True"
	I1014 08:47:23.313162   15224 pod_ready.go:82] duration metric: took 400.2274ms for pod "kube-proxy-r74dx" in "kube-system" namespace to be "Ready" ...
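The "Waited for ... due to client-side throttling" lines above come from client-go's token-bucket rate limiter, not from API Priority and Fairness on the server (the log says as much). With the client defaults (about 5 requests/sec, burst 10), the back-to-back pod and node GETs in this loop get paced roughly 150-200ms apart, which is exactly what the waits record. A sketch of where those knobs live; the values below are illustrative, not what minikube uses.

    // throttle_tuning.go - where client-go's client-side rate limits are configured.
    package main

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        // Zero values fall back to client-go's defaults (about QPS 5, burst 10),
        // which is what produces the throttling waits seen in the log.
        cfg.QPS = 50
        cfg.Burst = 100
        _ = kubernetes.NewForConfigOrDie(cfg)
    }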
	I1014 08:47:23.313282   15224 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-multinode-671000" in "kube-system" namespace to be "Ready" ...
	I1014 08:47:23.507648   15224 request.go:632] Waited for 194.2842ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-671000
	I1014 08:47:23.507648   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-671000
	I1014 08:47:23.507648   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:23.507648   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:23.508208   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:23.512073   15224 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:47:23.512138   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:23.512138   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:23 GMT
	I1014 08:47:23.512138   15224 round_trippers.go:580]     Audit-Id: 27b859b4-8ea0-4405-86d2-b7f06931ee6d
	I1014 08:47:23.512138   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:23.512138   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:23.512138   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:23.512138   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:23.512545   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-671000","namespace":"kube-system","uid":"97febcab-f54d-4338-ba7c-2dc5e69b77fc","resourceVersion":"1922","creationTimestamp":"2024-10-14T15:22:38Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"3e987cfaedc75c39145e8fc131c60c81","kubernetes.io/config.mirror":"3e987cfaedc75c39145e8fc131c60c81","kubernetes.io/config.seen":"2024-10-14T15:22:32.104995089Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5205 chars]
	I1014 08:47:23.707644   15224 request.go:632] Waited for 194.4339ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:23.708118   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:23.708118   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:23.708118   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:23.708118   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:23.712120   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:47:23.712120   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:23.712231   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:23.712231   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:23.712231   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:23.712231   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:23.712231   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:23 GMT
	I1014 08:47:23.712231   15224 round_trippers.go:580]     Audit-Id: a0546db7-53b1-42b6-82b2-1ddac5257dfc
	I1014 08:47:23.712467   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:23.713024   15224 pod_ready.go:93] pod "kube-scheduler-multinode-671000" in "kube-system" namespace has status "Ready":"True"
	I1014 08:47:23.713119   15224 pod_ready.go:82] duration metric: took 399.7411ms for pod "kube-scheduler-multinode-671000" in "kube-system" namespace to be "Ready" ...
	I1014 08:47:23.713119   15224 pod_ready.go:39] duration metric: took 23.7174524s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 08:47:23.713185   15224 api_server.go:52] waiting for apiserver process to appear ...
	I1014 08:47:23.722066   15224 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 08:47:23.749304   15224 command_runner.go:130] > a834664fc8b8
	I1014 08:47:23.749420   15224 logs.go:282] 1 containers: [a834664fc8b8]
	I1014 08:47:23.759882   15224 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 08:47:23.786736   15224 command_runner.go:130] > 48c8492e231e
	I1014 08:47:23.786909   15224 logs.go:282] 1 containers: [48c8492e231e]
	I1014 08:47:23.796103   15224 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 08:47:23.820348   15224 command_runner.go:130] > 5d223e2e64fc
	I1014 08:47:23.820902   15224 command_runner.go:130] > d9831e9f8ce8
	I1014 08:47:23.822183   15224 logs.go:282] 2 containers: [5d223e2e64fc d9831e9f8ce8]
	I1014 08:47:23.830412   15224 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 08:47:23.853420   15224 command_runner.go:130] > d428685276e1
	I1014 08:47:23.854031   15224 command_runner.go:130] > 661e75bbf6b4
	I1014 08:47:23.854031   15224 logs.go:282] 2 containers: [d428685276e1 661e75bbf6b4]
	I1014 08:47:23.864582   15224 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 08:47:23.897352   15224 command_runner.go:130] > e83db276dec3
	I1014 08:47:23.897949   15224 command_runner.go:130] > ea19428d7036
	I1014 08:47:23.897949   15224 logs.go:282] 2 containers: [e83db276dec3 ea19428d7036]
	I1014 08:47:23.907963   15224 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 08:47:23.938100   15224 command_runner.go:130] > 8af48c446f7e
	I1014 08:47:23.938100   15224 command_runner.go:130] > 712aad669c9f
	I1014 08:47:23.938229   15224 logs.go:282] 2 containers: [8af48c446f7e 712aad669c9f]
	I1014 08:47:23.951769   15224 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 08:47:23.978763   15224 command_runner.go:130] > bba035362eb9
	I1014 08:47:23.979792   15224 command_runner.go:130] > fcdf89a3ac8c
	I1014 08:47:23.979867   15224 logs.go:282] 2 containers: [bba035362eb9 fcdf89a3ac8c]
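Each docker ps call above filters on the k8s_<component> container-name prefix that cri-dockerd's naming convention produces, and --format {{.ID}} keeps only the IDs that the subsequent log gathering needs. In the test these commands run over SSH inside the VM; a local os/exec equivalent, for illustration only:

    // container_ids.go - local equivalent of the ssh_runner docker ps calls above;
    // minikube actually runs these over SSH inside the VM.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet"} {
            ids, err := containerIDs(c)
            if err != nil {
                panic(err)
            }
            fmt.Println(c, ids) // e.g. kube-apiserver [a834664fc8b8], as in the log
        }
    }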
	I1014 08:47:23.979990   15224 logs.go:123] Gathering logs for kube-proxy [ea19428d7036] ...
	I1014 08:47:23.980053   15224 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea19428d7036"
	I1014 08:47:24.011154   15224 command_runner.go:130] ! I1014 15:22:47.466748       1 server_linux.go:66] "Using iptables proxy"
	I1014 08:47:24.011220   15224 command_runner.go:130] ! E1014 15:22:47.511018       1 proxier.go:734] "Error cleaning up nftables rules" err=<
	I1014 08:47:24.011220   15224 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-24: Error: Could not process rule: Operation not supported
	I1014 08:47:24.011220   15224 command_runner.go:130] ! 	add table ip kube-proxy
	I1014 08:47:24.011220   15224 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^
	I1014 08:47:24.011220   15224 command_runner.go:130] !  >
	I1014 08:47:24.011220   15224 command_runner.go:130] ! E1014 15:22:47.546236       1 proxier.go:734] "Error cleaning up nftables rules" err=<
	I1014 08:47:24.011220   15224 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
	I1014 08:47:24.011220   15224 command_runner.go:130] ! 	add table ip6 kube-proxy
	I1014 08:47:24.011220   15224 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^^
	I1014 08:47:24.011220   15224 command_runner.go:130] !  >
	I1014 08:47:24.011220   15224 command_runner.go:130] ! I1014 15:22:47.606437       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["172.20.100.167"]
	I1014 08:47:24.011220   15224 command_runner.go:130] ! E1014 15:22:47.606505       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1014 08:47:24.011220   15224 command_runner.go:130] ! I1014 15:22:47.788755       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1014 08:47:24.011220   15224 command_runner.go:130] ! I1014 15:22:47.788820       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1014 08:47:24.011220   15224 command_runner.go:130] ! I1014 15:22:47.788852       1 server_linux.go:169] "Using iptables Proxier"
	I1014 08:47:24.011220   15224 command_runner.go:130] ! I1014 15:22:47.807050       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1014 08:47:24.011220   15224 command_runner.go:130] ! I1014 15:22:47.807424       1 server.go:483] "Version info" version="v1.31.1"
	I1014 08:47:24.011220   15224 command_runner.go:130] ! I1014 15:22:47.807440       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 08:47:24.011220   15224 command_runner.go:130] ! I1014 15:22:47.810361       1 config.go:199] "Starting service config controller"
	I1014 08:47:24.011220   15224 command_runner.go:130] ! I1014 15:22:47.810416       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1014 08:47:24.011220   15224 command_runner.go:130] ! I1014 15:22:47.810600       1 config.go:105] "Starting endpoint slice config controller"
	I1014 08:47:24.011220   15224 command_runner.go:130] ! I1014 15:22:47.810629       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1014 08:47:24.011220   15224 command_runner.go:130] ! I1014 15:22:47.811297       1 config.go:328] "Starting node config controller"
	I1014 08:47:24.011820   15224 command_runner.go:130] ! I1014 15:22:47.811309       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1014 08:47:24.011820   15224 command_runner.go:130] ! I1014 15:22:47.911443       1 shared_informer.go:320] Caches are synced for node config
	I1014 08:47:24.011820   15224 command_runner.go:130] ! I1014 15:22:47.911791       1 shared_informer.go:320] Caches are synced for service config
	I1014 08:47:24.011820   15224 command_runner.go:130] ! I1014 15:22:47.911866       1 shared_informer.go:320] Caches are synced for endpoint slice config
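Two details in the kube-proxy log above are worth calling out. First, the "Error cleaning up nftables rules ... Operation not supported" entries appear harmless here: kube-proxy v1.31 running in iptables mode first tries to delete any leftover kube-proxy nftables tables, the minikube VM's kernel rejects the nft operation, and the proxy then proceeds with "Using iptables Proxier" as logged. Second, the "Waiting for caches to sync ... Caches are synced" pairs are client-go's shared-informer startup handshake. A minimal sketch of that informer pattern, under the same kubeconfig assumption as the earlier snippets:

    // informer_sync.go - minimal version of the shared-informer startup handshake
    // behind the "Waiting for caches to sync ... Caches are synced" lines above.
    package main

    import (
        "fmt"
        "time"

        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/cache"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        factory := informers.NewSharedInformerFactory(client, 10*time.Minute)
        svc := factory.Core().V1().Services().Informer() // kube-proxy watches Services among others

        stop := make(chan struct{})
        defer close(stop)
        factory.Start(stop) // starts the list/watch loops

        // Blocks until the initial list is cached - the "synced" log lines fire here.
        if !cache.WaitForCacheSync(stop, svc.HasSynced) {
            panic("caches failed to sync")
        }
        fmt.Println("caches are synced")
    }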
	I1014 08:47:24.016336   15224 logs.go:123] Gathering logs for kube-controller-manager [8af48c446f7e] ...
	I1014 08:47:24.016336   15224 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af48c446f7e"
	I1014 08:47:24.047920   15224 command_runner.go:130] ! I1014 15:46:12.989235       1 serving.go:386] Generated self-signed cert in-memory
	I1014 08:47:24.048096   15224 command_runner.go:130] ! I1014 15:46:13.820617       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I1014 08:47:24.048096   15224 command_runner.go:130] ! I1014 15:46:13.820897       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 08:47:24.048273   15224 command_runner.go:130] ! I1014 15:46:13.823101       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1014 08:47:24.048344   15224 command_runner.go:130] ! I1014 15:46:13.823494       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1014 08:47:24.048344   15224 command_runner.go:130] ! I1014 15:46:13.824132       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I1014 08:47:24.048914   15224 command_runner.go:130] ! I1014 15:46:13.824214       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1014 08:47:24.048914   15224 command_runner.go:130] ! I1014 15:46:17.208145       1 controllermanager.go:797] "Started controller" controller="serviceaccount-token-controller"
	I1014 08:47:24.049324   15224 command_runner.go:130] ! I1014 15:46:17.211496       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I1014 08:47:24.049324   15224 command_runner.go:130] ! I1014 15:46:17.268813       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I1014 08:47:24.049917   15224 command_runner.go:130] ! I1014 15:46:17.269727       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I1014 08:47:24.050720   15224 command_runner.go:130] ! I1014 15:46:17.270864       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I1014 08:47:24.050720   15224 command_runner.go:130] ! I1014 15:46:17.271094       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I1014 08:47:24.050786   15224 command_runner.go:130] ! I1014 15:46:17.271857       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I1014 08:47:24.050786   15224 command_runner.go:130] ! I1014 15:46:17.271962       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I1014 08:47:24.050786   15224 command_runner.go:130] ! I1014 15:46:17.272049       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I1014 08:47:24.050786   15224 command_runner.go:130] ! I1014 15:46:17.272075       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I1014 08:47:24.050786   15224 command_runner.go:130] ! I1014 15:46:17.273540       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I1014 08:47:24.051348   15224 command_runner.go:130] ! I1014 15:46:17.274245       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I1014 08:47:24.051401   15224 command_runner.go:130] ! I1014 15:46:17.274579       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I1014 08:47:24.051401   15224 command_runner.go:130] ! I1014 15:46:17.274747       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I1014 08:47:24.051486   15224 command_runner.go:130] ! I1014 15:46:17.274772       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I1014 08:47:24.051486   15224 command_runner.go:130] ! I1014 15:46:17.275348       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I1014 08:47:24.051486   15224 command_runner.go:130] ! I1014 15:46:17.275380       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I1014 08:47:24.051486   15224 command_runner.go:130] ! I1014 15:46:17.275397       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I1014 08:47:24.051486   15224 command_runner.go:130] ! I1014 15:46:17.275571       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I1014 08:47:24.051486   15224 command_runner.go:130] ! I1014 15:46:17.275603       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I1014 08:47:24.051486   15224 command_runner.go:130] ! W1014 15:46:17.275618       1 shared_informer.go:597] resyncPeriod 13h32m18.096579392s is smaller than resyncCheckPeriod 20h55m54.648340273s and the informer has already started. Changing it to 20h55m54.648340273s
	I1014 08:47:24.051486   15224 command_runner.go:130] ! I1014 15:46:17.276096       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I1014 08:47:24.051486   15224 command_runner.go:130] ! I1014 15:46:17.276150       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I1014 08:47:24.051486   15224 command_runner.go:130] ! I1014 15:46:17.276197       1 controllermanager.go:797] "Started controller" controller="resourcequota-controller"
	I1014 08:47:24.051486   15224 command_runner.go:130] ! I1014 15:46:17.276213       1 resource_quota_controller.go:300] "Starting resource quota controller" logger="resourcequota-controller"
	I1014 08:47:24.051486   15224 command_runner.go:130] ! I1014 15:46:17.276260       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I1014 08:47:24.051486   15224 command_runner.go:130] ! I1014 15:46:17.276359       1 resource_quota_monitor.go:308] "QuotaMonitor running" logger="resourcequota-controller"
	I1014 08:47:24.051486   15224 command_runner.go:130] ! I1014 15:46:17.283642       1 controllermanager.go:797] "Started controller" controller="replicaset-controller"
	I1014 08:47:24.051486   15224 command_runner.go:130] ! I1014 15:46:17.284697       1 replica_set.go:217] "Starting controller" logger="replicaset-controller" name="replicaset"
	I1014 08:47:24.051486   15224 command_runner.go:130] ! I1014 15:46:17.284913       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I1014 08:47:24.051486   15224 command_runner.go:130] ! I1014 15:46:17.288417       1 controllermanager.go:797] "Started controller" controller="cronjob-controller"
	I1014 08:47:24.051486   15224 command_runner.go:130] ! I1014 15:46:17.289073       1 cronjob_controllerv2.go:145] "Starting cronjob controller v2" logger="cronjob-controller"
	I1014 08:47:24.051486   15224 command_runner.go:130] ! I1014 15:46:17.289091       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I1014 08:47:24.051486   15224 command_runner.go:130] ! I1014 15:46:17.292212       1 controllermanager.go:797] "Started controller" controller="certificatesigningrequest-approving-controller"
	I1014 08:47:24.051486   15224 command_runner.go:130] ! I1014 15:46:17.292573       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I1014 08:47:24.051486   15224 command_runner.go:130] ! I1014 15:46:17.292591       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I1014 08:47:24.051486   15224 command_runner.go:130] ! I1014 15:46:17.295276       1 controllermanager.go:797] "Started controller" controller="clusterrole-aggregation-controller"
	I1014 08:47:24.051486   15224 command_runner.go:130] ! I1014 15:46:17.295785       1 controllermanager.go:749] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I1014 08:47:24.051486   15224 command_runner.go:130] ! I1014 15:46:17.298756       1 clusterroleaggregation_controller.go:194] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I1014 08:47:24.051486   15224 command_runner.go:130] ! I1014 15:46:17.299107       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I1014 08:47:24.052023   15224 command_runner.go:130] ! I1014 15:46:17.299997       1 controllermanager.go:797] "Started controller" controller="endpointslice-mirroring-controller"
	I1014 08:47:24.052023   15224 command_runner.go:130] ! I1014 15:46:17.302040       1 endpointslicemirroring_controller.go:227] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I1014 08:47:24.052090   15224 command_runner.go:130] ! I1014 15:46:17.302058       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I1014 08:47:24.052090   15224 command_runner.go:130] ! I1014 15:46:17.305668       1 controllermanager.go:797] "Started controller" controller="replicationcontroller-controller"
	I1014 08:47:24.052090   15224 command_runner.go:130] ! I1014 15:46:17.308801       1 replica_set.go:217] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I1014 08:47:24.052090   15224 command_runner.go:130] ! I1014 15:46:17.308819       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I1014 08:47:24.052090   15224 command_runner.go:130] ! I1014 15:46:17.318320       1 shared_informer.go:320] Caches are synced for tokens
	I1014 08:47:24.052090   15224 command_runner.go:130] ! I1014 15:46:17.329856       1 controllermanager.go:797] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I1014 08:47:24.052194   15224 command_runner.go:130] ! I1014 15:46:17.330990       1 horizontal.go:201] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I1014 08:47:24.052194   15224 command_runner.go:130] ! I1014 15:46:17.331395       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I1014 08:47:24.052194   15224 command_runner.go:130] ! I1014 15:46:17.345566       1 controllermanager.go:797] "Started controller" controller="disruption-controller"
	I1014 08:47:24.052268   15224 command_runner.go:130] ! I1014 15:46:17.345806       1 disruption.go:452] "Sending events to api server." logger="disruption-controller"
	I1014 08:47:24.052268   15224 command_runner.go:130] ! I1014 15:46:17.345841       1 disruption.go:463] "Starting disruption controller" logger="disruption-controller"
	I1014 08:47:24.052342   15224 command_runner.go:130] ! I1014 15:46:17.345937       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I1014 08:47:24.052342   15224 command_runner.go:130] ! E1014 15:46:17.350088       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I1014 08:47:24.052342   15224 command_runner.go:130] ! I1014 15:46:17.350237       1 controllermanager.go:775] "Warning: skipping controller" controller="service-lb-controller"
	I1014 08:47:24.052342   15224 command_runner.go:130] ! I1014 15:46:17.350277       1 controllermanager.go:775] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I1014 08:47:24.052342   15224 command_runner.go:130] ! I1014 15:46:17.359040       1 controllermanager.go:797] "Started controller" controller="endpoints-controller"
	I1014 08:47:24.052342   15224 command_runner.go:130] ! I1014 15:46:17.360243       1 endpoints_controller.go:182] "Starting endpoint controller" logger="endpoints-controller"
	I1014 08:47:24.052342   15224 command_runner.go:130] ! I1014 15:46:17.360265       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I1014 08:47:24.052342   15224 command_runner.go:130] ! I1014 15:46:17.362115       1 controllermanager.go:797] "Started controller" controller="pod-garbage-collector-controller"
	I1014 08:47:24.052466   15224 command_runner.go:130] ! I1014 15:46:17.362235       1 gc_controller.go:99] "Starting GC controller" logger="pod-garbage-collector-controller"
	I1014 08:47:24.052466   15224 command_runner.go:130] ! I1014 15:46:17.362245       1 shared_informer.go:313] Waiting for caches to sync for GC
	I1014 08:47:24.052466   15224 command_runner.go:130] ! I1014 15:46:17.364537       1 controllermanager.go:797] "Started controller" controller="serviceaccount-controller"
	I1014 08:47:24.052515   15224 command_runner.go:130] ! I1014 15:46:17.364725       1 serviceaccounts_controller.go:114] "Starting service account controller" logger="serviceaccount-controller"
	I1014 08:47:24.052515   15224 command_runner.go:130] ! I1014 15:46:17.364738       1 shared_informer.go:313] Waiting for caches to sync for service account
	I1014 08:47:24.052515   15224 command_runner.go:130] ! I1014 15:46:17.367152       1 controllermanager.go:797] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I1014 08:47:24.052515   15224 command_runner.go:130] ! I1014 15:46:17.367373       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I1014 08:47:24.052585   15224 command_runner.go:130] ! I1014 15:46:17.369619       1 controllermanager.go:797] "Started controller" controller="bootstrap-signer-controller"
	I1014 08:47:24.052613   15224 command_runner.go:130] ! I1014 15:46:17.370097       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I1014 08:47:24.052613   15224 command_runner.go:130] ! I1014 15:46:17.373109       1 controllermanager.go:797] "Started controller" controller="token-cleaner-controller"
	I1014 08:47:24.052647   15224 command_runner.go:130] ! I1014 15:46:17.373475       1 tokencleaner.go:117] "Starting token cleaner controller" logger="token-cleaner-controller"
	I1014 08:47:24.053271   15224 command_runner.go:130] ! I1014 15:46:17.373486       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I1014 08:47:24.053271   15224 command_runner.go:130] ! I1014 15:46:17.373493       1 shared_informer.go:320] Caches are synced for token_cleaner
	I1014 08:47:24.053362   15224 command_runner.go:130] ! I1014 15:46:17.375506       1 controllermanager.go:797] "Started controller" controller="ttl-after-finished-controller"
	I1014 08:47:24.053394   15224 command_runner.go:130] ! I1014 15:46:17.375684       1 ttlafterfinished_controller.go:112] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I1014 08:47:24.053394   15224 command_runner.go:130] ! I1014 15:46:17.375694       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I1014 08:47:24.053465   15224 command_runner.go:130] ! I1014 15:46:17.379552       1 controllermanager.go:797] "Started controller" controller="ephemeral-volume-controller"
	I1014 08:47:24.053494   15224 command_runner.go:130] ! I1014 15:46:17.380063       1 controller.go:173] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I1014 08:47:24.053494   15224 command_runner.go:130] ! I1014 15:46:17.380270       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I1014 08:47:24.053561   15224 command_runner.go:130] ! I1014 15:46:17.413079       1 controllermanager.go:797] "Started controller" controller="namespace-controller"
	I1014 08:47:24.053561   15224 command_runner.go:130] ! I1014 15:46:17.413676       1 namespace_controller.go:202] "Starting namespace controller" logger="namespace-controller"
	I1014 08:47:24.053561   15224 command_runner.go:130] ! I1014 15:46:17.415689       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I1014 08:47:24.053561   15224 command_runner.go:130] ! I1014 15:46:17.418729       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I1014 08:47:24.053561   15224 command_runner.go:130] ! I1014 15:46:17.418858       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I1014 08:47:24.053561   15224 command_runner.go:130] ! I1014 15:46:17.418983       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I1014 08:47:24.053561   15224 command_runner.go:130] ! I1014 15:46:17.420448       1 controllermanager.go:797] "Started controller" controller="certificatesigningrequest-signing-controller"
	I1014 08:47:24.053561   15224 command_runner.go:130] ! I1014 15:46:17.420573       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I1014 08:47:24.053561   15224 command_runner.go:130] ! I1014 15:46:17.420658       1 controllermanager.go:775] "Warning: skipping controller" controller="node-route-controller"
	I1014 08:47:24.053561   15224 command_runner.go:130] ! I1014 15:46:17.420878       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I1014 08:47:24.053561   15224 command_runner.go:130] ! I1014 15:46:17.422022       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I1014 08:47:24.053561   15224 command_runner.go:130] ! I1014 15:46:17.422169       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I1014 08:47:24.054315   15224 command_runner.go:130] ! I1014 15:46:17.422636       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I1014 08:47:24.054315   15224 command_runner.go:130] ! I1014 15:46:17.425521       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I1014 08:47:24.055014   15224 command_runner.go:130] ! I1014 15:46:17.425557       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I1014 08:47:24.055014   15224 command_runner.go:130] ! I1014 15:46:17.425747       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I1014 08:47:24.055091   15224 command_runner.go:130] ! I1014 15:46:17.425569       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I1014 08:47:24.055154   15224 command_runner.go:130] ! I1014 15:46:17.425577       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I1014 08:47:24.055195   15224 command_runner.go:130] ! E1014 15:46:17.429609       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I1014 08:47:24.055195   15224 command_runner.go:130] ! I1014 15:46:17.429771       1 controllermanager.go:775] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I1014 08:47:24.055195   15224 command_runner.go:130] ! I1014 15:46:17.432720       1 controllermanager.go:797] "Started controller" controller="persistentvolume-attach-detach-controller"
	I1014 08:47:24.055247   15224 command_runner.go:130] ! I1014 15:46:17.433242       1 attach_detach_controller.go:338] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I1014 08:47:24.055247   15224 command_runner.go:130] ! I1014 15:46:17.433509       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I1014 08:47:24.055298   15224 command_runner.go:130] ! I1014 15:46:17.437867       1 controllermanager.go:797] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I1014 08:47:24.055298   15224 command_runner.go:130] ! I1014 15:46:17.438432       1 pvc_protection_controller.go:105] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I1014 08:47:24.055338   15224 command_runner.go:130] ! I1014 15:46:17.438754       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I1014 08:47:24.055338   15224 command_runner.go:130] ! I1014 15:46:17.466996       1 controllermanager.go:797] "Started controller" controller="garbage-collector-controller"
	I1014 08:47:24.055338   15224 command_runner.go:130] ! I1014 15:46:17.467178       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I1014 08:47:24.055338   15224 command_runner.go:130] ! I1014 15:46:17.467191       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I1014 08:47:24.055338   15224 command_runner.go:130] ! I1014 15:46:17.467211       1 graph_builder.go:351] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I1014 08:47:24.055338   15224 command_runner.go:130] ! I1014 15:46:17.513974       1 controllermanager.go:797] "Started controller" controller="daemonset-controller"
	I1014 08:47:24.055338   15224 command_runner.go:130] ! I1014 15:46:17.514092       1 daemon_controller.go:294] "Starting daemon sets controller" logger="daemonset-controller"
	I1014 08:47:24.055338   15224 command_runner.go:130] ! I1014 15:46:17.514103       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I1014 08:47:24.055338   15224 command_runner.go:130] ! I1014 15:46:17.612272       1 controllermanager.go:797] "Started controller" controller="deployment-controller"
	I1014 08:47:24.055338   15224 command_runner.go:130] ! I1014 15:46:17.612390       1 deployment_controller.go:173] "Starting controller" logger="deployment-controller" controller="deployment"
	I1014 08:47:24.055338   15224 command_runner.go:130] ! I1014 15:46:17.612405       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I1014 08:47:24.055338   15224 command_runner.go:130] ! I1014 15:46:17.715625       1 controllermanager.go:797] "Started controller" controller="ttl-controller"
	I1014 08:47:24.055338   15224 command_runner.go:130] ! I1014 15:46:17.718491       1 ttl_controller.go:127] "Starting TTL controller" logger="ttl-controller"
	I1014 08:47:24.055338   15224 command_runner.go:130] ! I1014 15:46:17.718512       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I1014 08:47:24.055338   15224 command_runner.go:130] ! I1014 15:46:17.762259       1 node_lifecycle_controller.go:430] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I1014 08:47:24.055338   15224 command_runner.go:130] ! I1014 15:46:17.762792       1 controllermanager.go:797] "Started controller" controller="node-lifecycle-controller"
	I1014 08:47:24.055338   15224 command_runner.go:130] ! I1014 15:46:17.763108       1 node_lifecycle_controller.go:464] "Sending events to api server" logger="node-lifecycle-controller"
	I1014 08:47:24.055338   15224 command_runner.go:130] ! I1014 15:46:17.763488       1 node_lifecycle_controller.go:475] "Starting node controller" logger="node-lifecycle-controller"
	I1014 08:47:24.055338   15224 command_runner.go:130] ! I1014 15:46:17.763636       1 shared_informer.go:313] Waiting for caches to sync for taint
	I1014 08:47:24.055338   15224 command_runner.go:130] ! I1014 15:46:17.815269       1 controllermanager.go:797] "Started controller" controller="persistentvolume-expander-controller"
	I1014 08:47:24.055338   15224 command_runner.go:130] ! I1014 15:46:17.815926       1 controllermanager.go:749] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I1014 08:47:24.055338   15224 command_runner.go:130] ! I1014 15:46:17.815820       1 expand_controller.go:328] "Starting expand controller" logger="persistentvolume-expander-controller"
	I1014 08:47:24.055338   15224 command_runner.go:130] ! I1014 15:46:17.815981       1 shared_informer.go:313] Waiting for caches to sync for expand
	I1014 08:47:24.055338   15224 command_runner.go:130] ! I1014 15:46:17.865803       1 controllermanager.go:797] "Started controller" controller="taint-eviction-controller"
	I1014 08:47:24.055338   15224 command_runner.go:130] ! I1014 15:46:17.865833       1 controllermanager.go:749] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I1014 08:47:24.055338   15224 command_runner.go:130] ! I1014 15:46:17.865908       1 taint_eviction.go:281] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I1014 08:47:24.055338   15224 command_runner.go:130] ! I1014 15:46:17.865945       1 taint_eviction.go:287] "Sending events to api server" logger="taint-eviction-controller"
	I1014 08:47:24.055338   15224 command_runner.go:130] ! I1014 15:46:17.865986       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I1014 08:47:24.055338   15224 command_runner.go:130] ! I1014 15:46:17.923932       1 controllermanager.go:797] "Started controller" controller="job-controller"
	I1014 08:47:24.056024   15224 command_runner.go:130] ! I1014 15:46:17.924153       1 job_controller.go:226] "Starting job controller" logger="job-controller"
	I1014 08:47:24.056024   15224 command_runner.go:130] ! I1014 15:46:17.924184       1 shared_informer.go:313] Waiting for caches to sync for job
	I1014 08:47:24.056081   15224 command_runner.go:130] ! I1014 15:46:17.978728       1 controllermanager.go:797] "Started controller" controller="root-ca-certificate-publisher-controller"
	I1014 08:47:24.056081   15224 command_runner.go:130] ! I1014 15:46:17.978796       1 publisher.go:107] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I1014 08:47:24.056134   15224 command_runner.go:130] ! I1014 15:46:17.978809       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I1014 08:47:24.056162   15224 command_runner.go:130] ! I1014 15:46:18.018003       1 controllermanager.go:797] "Started controller" controller="endpointslice-controller"
	I1014 08:47:24.056196   15224 command_runner.go:130] ! I1014 15:46:18.018177       1 endpointslice_controller.go:281] "Starting endpoint slice controller" logger="endpointslice-controller"
	I1014 08:47:24.056196   15224 command_runner.go:130] ! I1014 15:46:18.018192       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I1014 08:47:24.056196   15224 command_runner.go:130] ! I1014 15:46:18.077409       1 controllermanager.go:797] "Started controller" controller="statefulset-controller"
	I1014 08:47:24.056196   15224 command_runner.go:130] ! I1014 15:46:18.078007       1 stateful_set.go:166] "Starting stateful set controller" logger="statefulset-controller"
	I1014 08:47:24.056196   15224 command_runner.go:130] ! I1014 15:46:18.078026       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I1014 08:47:24.056196   15224 command_runner.go:130] ! I1014 15:46:18.245465       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I1014 08:47:24.056196   15224 command_runner.go:130] ! I1014 15:46:18.246368       1 controllermanager.go:797] "Started controller" controller="node-ipam-controller"
	I1014 08:47:24.056196   15224 command_runner.go:130] ! I1014 15:46:18.246712       1 node_ipam_controller.go:141] "Starting ipam controller" logger="node-ipam-controller"
	I1014 08:47:24.056196   15224 command_runner.go:130] ! I1014 15:46:18.246910       1 shared_informer.go:313] Waiting for caches to sync for node
	I1014 08:47:24.056196   15224 command_runner.go:130] ! I1014 15:46:18.264869       1 controllermanager.go:797] "Started controller" controller="persistentvolume-binder-controller"
	I1014 08:47:24.056196   15224 command_runner.go:130] ! I1014 15:46:18.264984       1 pv_controller_base.go:308] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I1014 08:47:24.056196   15224 command_runner.go:130] ! I1014 15:46:18.266232       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I1014 08:47:24.056196   15224 command_runner.go:130] ! I1014 15:46:18.321121       1 controllermanager.go:797] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I1014 08:47:24.056196   15224 command_runner.go:130] ! I1014 15:46:18.323482       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I1014 08:47:24.056196   15224 command_runner.go:130] ! I1014 15:46:18.323903       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I1014 08:47:24.056196   15224 command_runner.go:130] ! I1014 15:46:18.431796       1 controllermanager.go:797] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I1014 08:47:24.056196   15224 command_runner.go:130] ! I1014 15:46:18.431873       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I1014 08:47:24.056196   15224 command_runner.go:130] ! I1014 15:46:18.465851       1 controllermanager.go:797] "Started controller" controller="persistentvolume-protection-controller"
	I1014 08:47:24.056196   15224 command_runner.go:130] ! I1014 15:46:18.468767       1 pv_protection_controller.go:81] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I1014 08:47:24.056196   15224 command_runner.go:130] ! I1014 15:46:18.469028       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I1014 08:47:24.056196   15224 command_runner.go:130] ! I1014 15:46:18.485571       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I1014 08:47:24.056196   15224 command_runner.go:130] ! I1014 15:46:18.534720       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I1014 08:47:24.056196   15224 command_runner.go:130] ! I1014 15:46:18.539015       1 shared_informer.go:320] Caches are synced for PVC protection
	I1014 08:47:24.056196   15224 command_runner.go:130] ! I1014 15:46:18.541399       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-671000\" does not exist"
	I1014 08:47:24.056196   15224 command_runner.go:130] ! I1014 15:46:18.541615       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-671000-m02\" does not exist"
	I1014 08:47:24.056196   15224 command_runner.go:130] ! I1014 15:46:18.549102       1 shared_informer.go:320] Caches are synced for TTL
	I1014 08:47:24.056742   15224 command_runner.go:130] ! I1014 15:46:18.549549       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-671000-m03\" does not exist"
	I1014 08:47:24.056791   15224 command_runner.go:130] ! I1014 15:46:18.550590       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I1014 08:47:24.056791   15224 command_runner.go:130] ! I1014 15:46:18.551387       1 shared_informer.go:320] Caches are synced for node
	I1014 08:47:24.056791   15224 command_runner.go:130] ! I1014 15:46:18.554673       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I1014 08:47:24.056791   15224 command_runner.go:130] ! I1014 15:46:18.557592       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1014 08:47:24.056791   15224 command_runner.go:130] ! I1014 15:46:18.558471       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I1014 08:47:24.056893   15224 command_runner.go:130] ! I1014 15:46:18.558669       1 shared_informer.go:320] Caches are synced for cidrallocator
	I1014 08:47:24.056893   15224 command_runner.go:130] ! I1014 15:46:18.559066       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000"
	I1014 08:47:24.056893   15224 command_runner.go:130] ! I1014 15:46:18.559166       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:24.057121   15224 command_runner.go:130] ! I1014 15:46:18.559144       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:24.057271   15224 command_runner.go:130] ! I1014 15:46:18.560823       1 shared_informer.go:320] Caches are synced for endpoint
	I1014 08:47:24.057271   15224 command_runner.go:130] ! I1014 15:46:18.563147       1 shared_informer.go:320] Caches are synced for GC
	I1014 08:47:24.057271   15224 command_runner.go:130] ! I1014 15:46:18.566072       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I1014 08:47:24.057271   15224 command_runner.go:130] ! I1014 15:46:18.566447       1 shared_informer.go:320] Caches are synced for persistent volume
	I1014 08:47:24.057271   15224 command_runner.go:130] ! I1014 15:46:18.566267       1 shared_informer.go:320] Caches are synced for service account
	I1014 08:47:24.057271   15224 command_runner.go:130] ! I1014 15:46:18.570369       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I1014 08:47:24.057271   15224 command_runner.go:130] ! I1014 15:46:18.570522       1 shared_informer.go:320] Caches are synced for PV protection
	I1014 08:47:24.057271   15224 command_runner.go:130] ! I1014 15:46:18.577368       1 shared_informer.go:320] Caches are synced for TTL after finished
	I1014 08:47:24.057271   15224 command_runner.go:130] ! I1014 15:46:18.580187       1 shared_informer.go:320] Caches are synced for crt configmap
	I1014 08:47:24.057271   15224 command_runner.go:130] ! I1014 15:46:18.580534       1 shared_informer.go:320] Caches are synced for ephemeral
	I1014 08:47:24.057271   15224 command_runner.go:130] ! I1014 15:46:18.585372       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I1014 08:47:24.057271   15224 command_runner.go:130] ! I1014 15:46:18.593972       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I1014 08:47:24.057271   15224 command_runner.go:130] ! I1014 15:46:18.595014       1 shared_informer.go:320] Caches are synced for cronjob
	I1014 08:47:24.057271   15224 command_runner.go:130] ! I1014 15:46:18.600012       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I1014 08:47:24.057271   15224 command_runner.go:130] ! I1014 15:46:18.602930       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I1014 08:47:24.057271   15224 command_runner.go:130] ! I1014 15:46:18.609680       1 shared_informer.go:320] Caches are synced for ReplicationController
	I1014 08:47:24.057271   15224 command_runner.go:130] ! I1014 15:46:18.613447       1 shared_informer.go:320] Caches are synced for deployment
	I1014 08:47:24.057271   15224 command_runner.go:130] ! I1014 15:46:18.616246       1 shared_informer.go:320] Caches are synced for expand
	I1014 08:47:24.057808   15224 command_runner.go:130] ! I1014 15:46:18.616739       1 shared_informer.go:320] Caches are synced for namespace
	I1014 08:47:24.057883   15224 command_runner.go:130] ! I1014 15:46:18.618534       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I1014 08:47:24.057883   15224 command_runner.go:130] ! I1014 15:46:18.625249       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I1014 08:47:24.057883   15224 command_runner.go:130] ! I1014 15:46:18.630423       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I1014 08:47:24.057883   15224 command_runner.go:130] ! I1014 15:46:18.632938       1 shared_informer.go:320] Caches are synced for HPA
	I1014 08:47:24.057959   15224 command_runner.go:130] ! I1014 15:46:18.633193       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I1014 08:47:24.057959   15224 command_runner.go:130] ! I1014 15:46:18.634381       1 shared_informer.go:320] Caches are synced for job
	I1014 08:47:24.057959   15224 command_runner.go:130] ! I1014 15:46:18.634623       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1014 08:47:24.057959   15224 command_runner.go:130] ! I1014 15:46:18.634920       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I1014 08:47:24.057959   15224 command_runner.go:130] ! I1014 15:46:18.649619       1 shared_informer.go:320] Caches are synced for disruption
	I1014 08:47:24.058035   15224 command_runner.go:130] ! I1014 15:46:18.668155       1 shared_informer.go:320] Caches are synced for taint
	I1014 08:47:24.058035   15224 command_runner.go:130] ! I1014 15:46:18.670026       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1014 08:47:24.058035   15224 command_runner.go:130] ! I1014 15:46:18.680357       1 shared_informer.go:320] Caches are synced for stateful set
	I1014 08:47:24.058103   15224 command_runner.go:130] ! I1014 15:46:18.700582       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000"
	I1014 08:47:24.058129   15224 command_runner.go:130] ! I1014 15:46:18.708812       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:24.058162   15224 command_runner.go:130] ! I1014 15:46:18.714134       1 shared_informer.go:320] Caches are synced for daemon sets
	I1014 08:47:24.058162   15224 command_runner.go:130] ! I1014 15:46:18.718536       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-671000"
	I1014 08:47:24.058162   15224 command_runner.go:130] ! I1014 15:46:18.718841       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-671000-m02"
	I1014 08:47:24.058162   15224 command_runner.go:130] ! I1014 15:46:18.719036       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-671000-m03"
	I1014 08:47:24.058162   15224 command_runner.go:130] ! I1014 15:46:18.721210       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="135.448763ms"
	I1014 08:47:24.058162   15224 command_runner.go:130] ! I1014 15:46:18.721514       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="40.1µs"
	I1014 08:47:24.058162   15224 command_runner.go:130] ! I1014 15:46:18.721809       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="136.173363ms"
	I1014 08:47:24.058162   15224 command_runner.go:130] ! I1014 15:46:18.722033       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="29.5µs"
	I1014 08:47:24.058162   15224 command_runner.go:130] ! I1014 15:46:18.722234       1 node_lifecycle_controller.go:1036] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1014 08:47:24.058162   15224 command_runner.go:130] ! I1014 15:46:18.777385       1 shared_informer.go:320] Caches are synced for resource quota
	I1014 08:47:24.058162   15224 command_runner.go:130] ! I1014 15:46:18.786812       1 shared_informer.go:320] Caches are synced for resource quota
	I1014 08:47:24.058162   15224 command_runner.go:130] ! I1014 15:46:18.833914       1 shared_informer.go:320] Caches are synced for attach detach
	I1014 08:47:24.058162   15224 command_runner.go:130] ! I1014 15:46:19.252391       1 shared_informer.go:320] Caches are synced for garbage collector
	I1014 08:47:24.058162   15224 command_runner.go:130] ! I1014 15:46:19.267855       1 shared_informer.go:320] Caches are synced for garbage collector
	I1014 08:47:24.058162   15224 command_runner.go:130] ! I1014 15:46:19.268119       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I1014 08:47:24.058162   15224 command_runner.go:130] ! I1014 15:46:59.871635       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000"
	I1014 08:47:24.058162   15224 command_runner.go:130] ! I1014 15:46:59.892163       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000"
	I1014 08:47:24.058162   15224 command_runner.go:130] ! I1014 15:47:03.736416       1 node_lifecycle_controller.go:1055] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I1014 08:47:24.058162   15224 command_runner.go:130] ! I1014 15:47:13.821153       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:24.058162   15224 command_runner.go:130] ! I1014 15:47:20.979721       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="114.5µs"
	I1014 08:47:24.058162   15224 command_runner.go:130] ! I1014 15:47:22.061324       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="10.05527ms"
	I1014 08:47:24.058162   15224 command_runner.go:130] ! I1014 15:47:22.062652       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="53.8µs"
	I1014 08:47:24.058162   15224 command_runner.go:130] ! I1014 15:47:22.098955       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="28.422114ms"
	I1014 08:47:24.058162   15224 command_runner.go:130] ! I1014 15:47:22.099794       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="313.699µs"
	I1014 08:47:24.058162   15224 command_runner.go:130] ! I1014 15:47:23.920002       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:24.074857   15224 logs.go:123] Gathering logs for Docker ...
	I1014 08:47:24.074857   15224 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 08:47:24.107429   15224 command_runner.go:130] > Oct 14 15:44:44 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1014 08:47:24.107429   15224 command_runner.go:130] > Oct 14 15:44:44 minikube cri-dockerd[221]: time="2024-10-14T15:44:44Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I1014 08:47:24.107429   15224 command_runner.go:130] > Oct 14 15:44:44 minikube cri-dockerd[221]: time="2024-10-14T15:44:44Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1014 08:47:24.107429   15224 command_runner.go:130] > Oct 14 15:44:44 minikube cri-dockerd[221]: time="2024-10-14T15:44:44Z" level=info msg="Start docker client with request timeout 0s"
	I1014 08:47:24.108423   15224 command_runner.go:130] > Oct 14 15:44:44 minikube cri-dockerd[221]: time="2024-10-14T15:44:44Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1014 08:47:24.108456   15224 command_runner.go:130] > Oct 14 15:44:45 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I1014 08:47:24.108585   15224 command_runner.go:130] > Oct 14 15:44:45 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I1014 08:47:24.108585   15224 command_runner.go:130] > Oct 14 15:44:45 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I1014 08:47:24.108648   15224 command_runner.go:130] > Oct 14 15:44:47 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I1014 08:47:24.108648   15224 command_runner.go:130] > Oct 14 15:44:47 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I1014 08:47:24.108648   15224 command_runner.go:130] > Oct 14 15:44:47 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1014 08:47:24.108648   15224 command_runner.go:130] > Oct 14 15:44:47 minikube cri-dockerd[413]: time="2024-10-14T15:44:47Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I1014 08:47:24.108648   15224 command_runner.go:130] > Oct 14 15:44:47 minikube cri-dockerd[413]: time="2024-10-14T15:44:47Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1014 08:47:24.108648   15224 command_runner.go:130] > Oct 14 15:44:47 minikube cri-dockerd[413]: time="2024-10-14T15:44:47Z" level=info msg="Start docker client with request timeout 0s"
	I1014 08:47:24.108648   15224 command_runner.go:130] > Oct 14 15:44:47 minikube cri-dockerd[413]: time="2024-10-14T15:44:47Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1014 08:47:24.108648   15224 command_runner.go:130] > Oct 14 15:44:47 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I1014 08:47:24.108648   15224 command_runner.go:130] > Oct 14 15:44:47 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I1014 08:47:24.108648   15224 command_runner.go:130] > Oct 14 15:44:47 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I1014 08:47:24.108648   15224 command_runner.go:130] > Oct 14 15:44:49 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I1014 08:47:24.108648   15224 command_runner.go:130] > Oct 14 15:44:49 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I1014 08:47:24.108648   15224 command_runner.go:130] > Oct 14 15:44:49 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1014 08:47:24.108648   15224 command_runner.go:130] > Oct 14 15:44:50 minikube cri-dockerd[421]: time="2024-10-14T15:44:50Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I1014 08:47:24.108648   15224 command_runner.go:130] > Oct 14 15:44:50 minikube cri-dockerd[421]: time="2024-10-14T15:44:50Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1014 08:47:24.108648   15224 command_runner.go:130] > Oct 14 15:44:50 minikube cri-dockerd[421]: time="2024-10-14T15:44:50Z" level=info msg="Start docker client with request timeout 0s"
	I1014 08:47:24.108648   15224 command_runner.go:130] > Oct 14 15:44:50 minikube cri-dockerd[421]: time="2024-10-14T15:44:50Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1014 08:47:24.108648   15224 command_runner.go:130] > Oct 14 15:44:50 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I1014 08:47:24.108648   15224 command_runner.go:130] > Oct 14 15:44:50 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I1014 08:47:24.108648   15224 command_runner.go:130] > Oct 14 15:44:50 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I1014 08:47:24.108648   15224 command_runner.go:130] > Oct 14 15:44:52 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I1014 08:47:24.108648   15224 command_runner.go:130] > Oct 14 15:44:52 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I1014 08:47:24.108648   15224 command_runner.go:130] > Oct 14 15:44:52 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I1014 08:47:24.108648   15224 command_runner.go:130] > Oct 14 15:44:52 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I1014 08:47:24.108648   15224 command_runner.go:130] > Oct 14 15:44:52 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I1014 08:47:24.108648   15224 command_runner.go:130] > Oct 14 15:45:33 multinode-671000 systemd[1]: Starting Docker Application Container Engine...
	I1014 08:47:24.108648   15224 command_runner.go:130] > Oct 14 15:45:33 multinode-671000 dockerd[649]: time="2024-10-14T15:45:33.956984837Z" level=info msg="Starting up"
	I1014 08:47:24.109180   15224 command_runner.go:130] > Oct 14 15:45:33 multinode-671000 dockerd[649]: time="2024-10-14T15:45:33.957924243Z" level=info msg="containerd not running, starting managed containerd"
	I1014 08:47:24.109232   15224 command_runner.go:130] > Oct 14 15:45:33 multinode-671000 dockerd[649]: time="2024-10-14T15:45:33.959335951Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=655
	I1014 08:47:24.109232   15224 command_runner.go:130] > Oct 14 15:45:33 multinode-671000 dockerd[655]: time="2024-10-14T15:45:33.994773864Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	I1014 08:47:24.109232   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.020772213Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I1014 08:47:24.109310   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.021015015Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I1014 08:47:24.109371   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.021095615Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I1014 08:47:24.109371   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.021147816Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I1014 08:47:24.109371   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.021828519Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I1014 08:47:24.109371   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.021976120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I1014 08:47:24.109371   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.022248222Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I1014 08:47:24.109371   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.022376622Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I1014 08:47:24.109371   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.022401523Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I1014 08:47:24.109371   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.022414623Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I1014 08:47:24.109371   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.023030126Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I1014 08:47:24.109371   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.023715230Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I1014 08:47:24.109371   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.027058949Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I1014 08:47:24.109371   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.027212250Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I1014 08:47:24.109371   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.027346050Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I1014 08:47:24.109371   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.027434351Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I1014 08:47:24.109371   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.028070055Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I1014 08:47:24.109371   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.028254556Z" level=info msg="metadata content store policy set" policy=shared
	I1014 08:47:24.109371   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.033722086Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I1014 08:47:24.109371   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.033900187Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I1014 08:47:24.109371   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.033927888Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I1014 08:47:24.109371   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.033944088Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I1014 08:47:24.109371   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.033959488Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I1014 08:47:24.109371   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.034029088Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I1014 08:47:24.109371   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.034638992Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I1014 08:47:24.109371   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.034898493Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I1014 08:47:24.109903   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.034993394Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I1014 08:47:24.109954   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035025394Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I1014 08:47:24.109954   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035042394Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I1014 08:47:24.110051   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035056394Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I1014 08:47:24.110051   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035070894Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I1014 08:47:24.110132   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035091294Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I1014 08:47:24.110132   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035125794Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I1014 08:47:24.110132   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035139394Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I1014 08:47:24.110132   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035152195Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I1014 08:47:24.110132   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035200795Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I1014 08:47:24.110132   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035227495Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I1014 08:47:24.110132   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035242395Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I1014 08:47:24.110132   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035255095Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I1014 08:47:24.110132   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035268595Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I1014 08:47:24.110132   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035283595Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I1014 08:47:24.110132   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035296895Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I1014 08:47:24.110132   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035314495Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I1014 08:47:24.110132   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035330096Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I1014 08:47:24.110132   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035343596Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I1014 08:47:24.110132   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035364096Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I1014 08:47:24.110132   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035376796Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I1014 08:47:24.110132   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035388896Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I1014 08:47:24.110132   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035401196Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I1014 08:47:24.110132   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035419096Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I1014 08:47:24.110132   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035441896Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I1014 08:47:24.110132   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035454496Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I1014 08:47:24.110132   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035465896Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I1014 08:47:24.110132   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035512897Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I1014 08:47:24.110132   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035554297Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I1014 08:47:24.110132   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035568497Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I1014 08:47:24.110132   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035580597Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I1014 08:47:24.110755   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035590797Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I1014 08:47:24.110818   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035604297Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I1014 08:47:24.110818   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035619397Z" level=info msg="NRI interface is disabled by configuration."
	I1014 08:47:24.110882   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035934999Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I1014 08:47:24.110882   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.036229901Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I1014 08:47:24.110882   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.036295501Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I1014 08:47:24.110882   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.036322201Z" level=info msg="containerd successfully booted in 0.043787s"
	I1014 08:47:24.110882   15224 command_runner.go:130] > Oct 14 15:45:35 multinode-671000 dockerd[649]: time="2024-10-14T15:45:35.016752326Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1014 08:47:24.110882   15224 command_runner.go:130] > Oct 14 15:45:35 multinode-671000 dockerd[649]: time="2024-10-14T15:45:35.204043816Z" level=info msg="Loading containers: start."
	I1014 08:47:24.110882   15224 command_runner.go:130] > Oct 14 15:45:35 multinode-671000 dockerd[649]: time="2024-10-14T15:45:35.545951324Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1014 08:47:24.110882   15224 command_runner.go:130] > Oct 14 15:45:35 multinode-671000 dockerd[649]: time="2024-10-14T15:45:35.688138626Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I1014 08:47:24.110882   15224 command_runner.go:130] > Oct 14 15:45:35 multinode-671000 dockerd[649]: time="2024-10-14T15:45:35.780023455Z" level=info msg="Loading containers: done."
	I1014 08:47:24.110882   15224 command_runner.go:130] > Oct 14 15:45:35 multinode-671000 dockerd[649]: time="2024-10-14T15:45:35.809569125Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	I1014 08:47:24.110882   15224 command_runner.go:130] > Oct 14 15:45:35 multinode-671000 dockerd[649]: time="2024-10-14T15:45:35.809610125Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	I1014 08:47:24.110882   15224 command_runner.go:130] > Oct 14 15:45:35 multinode-671000 dockerd[649]: time="2024-10-14T15:45:35.809633825Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
	I1014 08:47:24.110882   15224 command_runner.go:130] > Oct 14 15:45:35 multinode-671000 dockerd[649]: time="2024-10-14T15:45:35.810490930Z" level=info msg="Daemon has completed initialization"
	I1014 08:47:24.110882   15224 command_runner.go:130] > Oct 14 15:45:35 multinode-671000 dockerd[649]: time="2024-10-14T15:45:35.853736479Z" level=info msg="API listen on /var/run/docker.sock"
	I1014 08:47:24.110882   15224 command_runner.go:130] > Oct 14 15:45:35 multinode-671000 dockerd[649]: time="2024-10-14T15:45:35.854139881Z" level=info msg="API listen on [::]:2376"
	I1014 08:47:24.110882   15224 command_runner.go:130] > Oct 14 15:45:35 multinode-671000 systemd[1]: Started Docker Application Container Engine.
	I1014 08:47:24.110882   15224 command_runner.go:130] > Oct 14 15:46:01 multinode-671000 systemd[1]: Stopping Docker Application Container Engine...
	I1014 08:47:24.110882   15224 command_runner.go:130] > Oct 14 15:46:01 multinode-671000 dockerd[649]: time="2024-10-14T15:46:01.049459779Z" level=info msg="Processing signal 'terminated'"
	I1014 08:47:24.110882   15224 command_runner.go:130] > Oct 14 15:46:01 multinode-671000 dockerd[649]: time="2024-10-14T15:46:01.053392981Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I1014 08:47:24.110882   15224 command_runner.go:130] > Oct 14 15:46:01 multinode-671000 dockerd[649]: time="2024-10-14T15:46:01.053568081Z" level=info msg="Daemon shutdown complete"
	I1014 08:47:24.110882   15224 command_runner.go:130] > Oct 14 15:46:01 multinode-671000 dockerd[649]: time="2024-10-14T15:46:01.053889681Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I1014 08:47:24.110882   15224 command_runner.go:130] > Oct 14 15:46:01 multinode-671000 dockerd[649]: time="2024-10-14T15:46:01.054172781Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I1014 08:47:24.110882   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 systemd[1]: docker.service: Deactivated successfully.
	I1014 08:47:24.110882   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 systemd[1]: Stopped Docker Application Container Engine.
	I1014 08:47:24.110882   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 systemd[1]: Starting Docker Application Container Engine...
	I1014 08:47:24.110882   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1087]: time="2024-10-14T15:46:02.109177376Z" level=info msg="Starting up"
	I1014 08:47:24.111429   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1087]: time="2024-10-14T15:46:02.110667577Z" level=info msg="containerd not running, starting managed containerd"
	I1014 08:47:24.111499   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1087]: time="2024-10-14T15:46:02.112008177Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1093
	I1014 08:47:24.111499   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.143199292Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	I1014 08:47:24.111594   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.168149004Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I1014 08:47:24.111594   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.168197704Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I1014 08:47:24.111661   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.168231304Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I1014 08:47:24.111661   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.168244704Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I1014 08:47:24.111661   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.168266504Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I1014 08:47:24.111661   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.168317904Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I1014 08:47:24.111661   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.168445004Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I1014 08:47:24.111661   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.168531404Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I1014 08:47:24.111661   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.168550204Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I1014 08:47:24.111661   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.168561104Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I1014 08:47:24.111661   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.168583904Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I1014 08:47:24.111661   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.168690904Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I1014 08:47:24.111661   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.171907506Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I1014 08:47:24.111661   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.172002906Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I1014 08:47:24.111661   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.172175606Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I1014 08:47:24.111661   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.172377606Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I1014 08:47:24.111661   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.172424606Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I1014 08:47:24.111661   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.172461506Z" level=info msg="metadata content store policy set" policy=shared
	I1014 08:47:24.111661   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.172795106Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I1014 08:47:24.111661   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.172882406Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I1014 08:47:24.111661   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.172902406Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I1014 08:47:24.112281   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.172916306Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I1014 08:47:24.112281   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.172930506Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I1014 08:47:24.112397   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.172992206Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I1014 08:47:24.112468   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173380806Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I1014 08:47:24.112468   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173626906Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I1014 08:47:24.112574   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173734806Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I1014 08:47:24.112574   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173758306Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I1014 08:47:24.112574   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173794906Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I1014 08:47:24.112574   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173849506Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I1014 08:47:24.112574   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173864606Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I1014 08:47:24.112574   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173878206Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I1014 08:47:24.112574   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173900507Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I1014 08:47:24.112574   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173916207Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I1014 08:47:24.112574   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173928607Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I1014 08:47:24.112574   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173940507Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I1014 08:47:24.112574   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173959407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I1014 08:47:24.112574   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173973007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I1014 08:47:24.112574   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173985207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I1014 08:47:24.112574   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173998307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I1014 08:47:24.112574   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174010307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I1014 08:47:24.112574   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174023407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I1014 08:47:24.112574   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174035407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I1014 08:47:24.112574   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174047207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I1014 08:47:24.112574   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174077107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I1014 08:47:24.112574   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174095807Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I1014 08:47:24.112574   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174107607Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I1014 08:47:24.112574   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174191507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I1014 08:47:24.112574   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174206607Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I1014 08:47:24.112574   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174229207Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I1014 08:47:24.112574   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174259307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I1014 08:47:24.113105   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174352207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I1014 08:47:24.113105   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174370407Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I1014 08:47:24.113173   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174499607Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I1014 08:47:24.113242   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174541907Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I1014 08:47:24.113242   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174556007Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I1014 08:47:24.113420   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174568207Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I1014 08:47:24.113420   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174578207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I1014 08:47:24.113530   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174598407Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I1014 08:47:24.113530   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174612107Z" level=info msg="NRI interface is disabled by configuration."
	I1014 08:47:24.113601   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174893107Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I1014 08:47:24.113601   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.175192307Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I1014 08:47:24.113601   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.175271607Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I1014 08:47:24.113601   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.175364007Z" level=info msg="containerd successfully booted in 0.032943s"
	I1014 08:47:24.113601   15224 command_runner.go:130] > Oct 14 15:46:03 multinode-671000 dockerd[1087]: time="2024-10-14T15:46:03.157176768Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1014 08:47:24.113601   15224 command_runner.go:130] > Oct 14 15:46:03 multinode-671000 dockerd[1087]: time="2024-10-14T15:46:03.188626383Z" level=info msg="Loading containers: start."
	I1014 08:47:24.113601   15224 command_runner.go:130] > Oct 14 15:46:03 multinode-671000 dockerd[1087]: time="2024-10-14T15:46:03.419822091Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1014 08:47:24.113601   15224 command_runner.go:130] > Oct 14 15:46:03 multinode-671000 dockerd[1087]: time="2024-10-14T15:46:03.533275144Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I1014 08:47:24.113601   15224 command_runner.go:130] > Oct 14 15:46:03 multinode-671000 dockerd[1087]: time="2024-10-14T15:46:03.631380390Z" level=info msg="Loading containers: done."
	I1014 08:47:24.113601   15224 command_runner.go:130] > Oct 14 15:46:03 multinode-671000 dockerd[1087]: time="2024-10-14T15:46:03.656005002Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
	I1014 08:47:24.113601   15224 command_runner.go:130] > Oct 14 15:46:03 multinode-671000 dockerd[1087]: time="2024-10-14T15:46:03.656245502Z" level=info msg="Daemon has completed initialization"
	I1014 08:47:24.113601   15224 command_runner.go:130] > Oct 14 15:46:03 multinode-671000 dockerd[1087]: time="2024-10-14T15:46:03.695426820Z" level=info msg="API listen on /var/run/docker.sock"
	I1014 08:47:24.113601   15224 command_runner.go:130] > Oct 14 15:46:03 multinode-671000 systemd[1]: Started Docker Application Container Engine.
	I1014 08:47:24.113601   15224 command_runner.go:130] > Oct 14 15:46:03 multinode-671000 dockerd[1087]: time="2024-10-14T15:46:03.695638120Z" level=info msg="API listen on [::]:2376"
	I1014 08:47:24.113601   15224 command_runner.go:130] > Oct 14 15:46:04 multinode-671000 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1014 08:47:24.113601   15224 command_runner.go:130] > Oct 14 15:46:04 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:04Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I1014 08:47:24.113601   15224 command_runner.go:130] > Oct 14 15:46:04 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:04Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1014 08:47:24.113601   15224 command_runner.go:130] > Oct 14 15:46:04 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:04Z" level=info msg="Start docker client with request timeout 0s"
	I1014 08:47:24.113601   15224 command_runner.go:130] > Oct 14 15:46:04 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:04Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I1014 08:47:24.113601   15224 command_runner.go:130] > Oct 14 15:46:04 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:04Z" level=info msg="Loaded network plugin cni"
	I1014 08:47:24.113601   15224 command_runner.go:130] > Oct 14 15:46:04 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:04Z" level=info msg="Docker cri networking managed by network plugin cni"
	I1014 08:47:24.113601   15224 command_runner.go:130] > Oct 14 15:46:04 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:04Z" level=info msg="Setting cgroupDriver cgroupfs"
	I1014 08:47:24.113601   15224 command_runner.go:130] > Oct 14 15:46:04 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:04Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I1014 08:47:24.113601   15224 command_runner.go:130] > Oct 14 15:46:04 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:04Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I1014 08:47:24.113601   15224 command_runner.go:130] > Oct 14 15:46:04 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:04Z" level=info msg="Start cri-dockerd grpc backend"
	I1014 08:47:24.114137   15224 command_runner.go:130] > Oct 14 15:46:04 multinode-671000 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I1014 08:47:24.114137   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:09Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7c65d6cfc9-fs9ct_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"2f8cc9a218fef4446839b0253827fc2dd89f8eccbd66ab7f0e5123c033f0aacc\""
	I1014 08:47:24.114249   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:09Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-7dff88458-vlp7j_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"06e529266db4b2cf3ad09285cddeb89d7d11e858b5156cf73f41198c9500b9f2\""
	I1014 08:47:24.114362   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.635579177Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:24.114362   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.635817077Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:24.114362   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.635919877Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:24.114362   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.636083677Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:24.114467   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.762883836Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:24.114467   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.763092036Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:24.114467   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.763114536Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:24.114568   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.765440937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:24.114568   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:10Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1d3033f871fb11cb3095bcf5c5d43615de9685372a45edf226fe52b2f482bc71/resolv.conf as [nameserver 172.20.96.1]"
	I1014 08:47:24.114568   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.846488476Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:24.114659   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.847106376Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:24.114659   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.847254676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:24.114736   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.854373579Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:24.114736   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.883112593Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:24.114815   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.884477393Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:24.116908   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.884514293Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:24.116965   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.884605993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:24.117054   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:10Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6155e8be2d5d725a4259a45fe10f7ceb3fc581d528a6486633b563a59f331127/resolv.conf as [nameserver 172.20.96.1]"
	I1014 08:47:24.117149   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:11Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7bd4c36606eefc91e9ae07ea5683536fc78fdb6f7f752f44d28787b88540a878/resolv.conf as [nameserver 172.20.96.1]"
	I1014 08:47:24.117209   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.061102976Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:24.117304   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.061201476Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:24.117382   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.061221876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:24.117442   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.061393176Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:24.117442   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:11Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0697a11790e806fa4d679e3d97be4c7193692d1cb8f76d882cf3a75aa8e0c238/resolv.conf as [nameserver 172.20.96.1]"
	I1014 08:47:24.117442   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.312465294Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:24.117442   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.312610794Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:24.117442   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.312646494Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:24.117442   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.312762794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:24.117442   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.422697746Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:24.117442   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.422797746Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:24.117442   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.422816346Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:24.117442   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.423001046Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:24.117442   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.500801282Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:24.117442   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.501016583Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:24.117442   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.501037383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:24.117442   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.504117984Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:24.117442   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:15Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I1014 08:47:24.117442   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.472267615Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:24.117442   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.472571215Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:24.117442   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.472597215Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:24.117442   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.472873315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:24.117442   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.475833517Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:24.117442   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.476013017Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:24.117976   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.476107917Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:24.117976   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.476358717Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:24.117976   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.515050835Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:24.118115   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.515249635Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:24.118179   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.515393835Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:24.118179   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.515565835Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:24.118179   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:16Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6f8bdf552734e59df94d489a081585cdaeeaee729f0a0ae92105117d4d744f3f/resolv.conf as [nameserver 172.20.96.1]"
	I1014 08:47:24.118179   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:16Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/cdcdd532ba1369fe0e866b321f18e0bd99a88a2699c325eea17a070d48f2f024/resolv.conf as [nameserver 172.20.96.1]"
	I1014 08:47:24.118179   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.911588321Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:24.118179   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.913177522Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:24.118179   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.913368522Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:24.118179   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.914060722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:24.118179   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:16Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7bcadf1f0885ff3411b477a64c6af366c77eb264ce7a761f174b079b11e2e703/resolv.conf as [nameserver 172.20.96.1]"
	I1014 08:47:24.118179   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:17.063841193Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:24.118179   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:17.063929693Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:24.118179   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:17.063946093Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:24.118179   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:17.064242693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:24.118179   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:17.206735160Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:24.118179   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:17.207544260Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:24.118179   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:17.208633061Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:24.118179   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:17.224429668Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:24.118179   15224 command_runner.go:130] > Oct 14 15:46:47 multinode-671000 dockerd[1087]: time="2024-10-14T15:46:47.556424473Z" level=info msg="ignoring event" container=c76c2585681071ddc096fbc042a644b58d0b7d5d604107c6906a076c7aa1cca6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1014 08:47:24.118179   15224 command_runner.go:130] > Oct 14 15:46:47 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:47.559859508Z" level=info msg="shim disconnected" id=c76c2585681071ddc096fbc042a644b58d0b7d5d604107c6906a076c7aa1cca6 namespace=moby
	I1014 08:47:24.118179   15224 command_runner.go:130] > Oct 14 15:46:47 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:47.560270512Z" level=warning msg="cleaning up after shim disconnected" id=c76c2585681071ddc096fbc042a644b58d0b7d5d604107c6906a076c7aa1cca6 namespace=moby
	I1014 08:47:24.118713   15224 command_runner.go:130] > Oct 14 15:46:47 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:47.560505714Z" level=info msg="cleaning up dead shim" namespace=moby
	I1014 08:47:24.118713   15224 command_runner.go:130] > Oct 14 15:47:03 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:03.070959923Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:24.118713   15224 command_runner.go:130] > Oct 14 15:47:03 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:03.071176624Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:24.118713   15224 command_runner.go:130] > Oct 14 15:47:03 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:03.071240924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:24.118871   15224 command_runner.go:130] > Oct 14 15:47:03 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:03.071756926Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:24.118934   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.071716036Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:24.118988   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.071943436Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:24.119082   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.071968036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:24.119110   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.072116937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:24.119178   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:47:20Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/429b989a1a986d23a2e5aee0de1aef1e683a014bebb587981622bd80a3ac5221/resolv.conf as [nameserver 172.20.96.1]"
	I1014 08:47:24.119178   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.295865797Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:24.119178   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.295993998Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:24.119178   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.296019698Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:24.119178   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.296117898Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:24.119178   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:47:20Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9092d17516eb35243fd461a360605e738727838ee50f870f3bd6c290fd061d20/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I1014 08:47:24.119178   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.536751498Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:24.119178   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.537062099Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:24.119178   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.537100499Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:24.119178   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.537246499Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:24.119178   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.821494873Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:24.119178   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.821592273Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:24.119178   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.821611273Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:24.119178   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.821730874Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
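The two cri-dockerd "re-write config file" entries above show DNS being wired up for the containers that just started: one container gets 172.20.96.1 as its nameserver (the host-side address that the coredns log further down resolves as host.minikube.internal), while the other gets the in-cluster DNS service address plus the standard Kubernetes search path. Reconstructed from the bracketed values in the second entry (a sketch derived from the log line, not a captured file), that container's resolv.conf would read:

    nameserver 10.96.0.10
    search default.svc.cluster.local svc.cluster.local cluster.local
    options ndots:5

The ndots:5 option makes short names such as kubernetes.default be tried against the search domains first, which is exactly the NXDOMAIN-then-NOERROR pattern visible in the coredns queries below.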
	I1014 08:47:24.150676   15224 logs.go:123] Gathering logs for dmesg ...
	I1014 08:47:24.150676   15224 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 08:47:24.173686   15224 command_runner.go:130] > [Oct14 15:44] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I1014 08:47:24.173686   15224 command_runner.go:130] > [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I1014 08:47:24.173686   15224 command_runner.go:130] > [  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I1014 08:47:24.173686   15224 command_runner.go:130] > [  +0.121183] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I1014 08:47:24.173686   15224 command_runner.go:130] > [  +0.024192] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I1014 08:47:24.173686   15224 command_runner.go:130] > [  +0.000000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I1014 08:47:24.173686   15224 command_runner.go:130] > [  +0.000000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I1014 08:47:24.173686   15224 command_runner.go:130] > [  +0.058588] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I1014 08:47:24.173686   15224 command_runner.go:130] > [  +0.021951] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I1014 08:47:24.173686   15224 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I1014 08:47:24.173686   15224 command_runner.go:130] > [  +5.764502] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I1014 08:47:24.173686   15224 command_runner.go:130] > [  +0.701221] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I1014 08:47:24.173686   15224 command_runner.go:130] > [  +1.823727] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	I1014 08:47:24.173686   15224 command_runner.go:130] > [  +7.351082] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I1014 08:47:24.173686   15224 command_runner.go:130] > [  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I1014 08:47:24.173686   15224 command_runner.go:130] > [  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	I1014 08:47:24.174542   15224 command_runner.go:130] > [Oct14 15:45] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	I1014 08:47:24.174542   15224 command_runner.go:130] > [  +0.175163] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	I1014 08:47:24.174542   15224 command_runner.go:130] > [ +26.061812] systemd-fstab-generator[1013]: Ignoring "noauto" option for root device
	I1014 08:47:24.174542   15224 command_runner.go:130] > [  +0.098944] kauditd_printk_skb: 71 callbacks suppressed
	I1014 08:47:24.174542   15224 command_runner.go:130] > [  +0.531295] systemd-fstab-generator[1053]: Ignoring "noauto" option for root device
	I1014 08:47:24.174542   15224 command_runner.go:130] > [Oct14 15:46] systemd-fstab-generator[1065]: Ignoring "noauto" option for root device
	I1014 08:47:24.174542   15224 command_runner.go:130] > [  +0.229472] systemd-fstab-generator[1079]: Ignoring "noauto" option for root device
	I1014 08:47:24.174542   15224 command_runner.go:130] > [  +2.943333] systemd-fstab-generator[1310]: Ignoring "noauto" option for root device
	I1014 08:47:24.174542   15224 command_runner.go:130] > [  +0.192845] systemd-fstab-generator[1321]: Ignoring "noauto" option for root device
	I1014 08:47:24.174542   15224 command_runner.go:130] > [  +0.209914] systemd-fstab-generator[1333]: Ignoring "noauto" option for root device
	I1014 08:47:24.174542   15224 command_runner.go:130] > [  +0.290916] systemd-fstab-generator[1348]: Ignoring "noauto" option for root device
	I1014 08:47:24.174542   15224 command_runner.go:130] > [  +0.928050] systemd-fstab-generator[1473]: Ignoring "noauto" option for root device
	I1014 08:47:24.174542   15224 command_runner.go:130] > [  +0.103044] kauditd_printk_skb: 202 callbacks suppressed
	I1014 08:47:24.174542   15224 command_runner.go:130] > [  +3.884891] systemd-fstab-generator[1614]: Ignoring "noauto" option for root device
	I1014 08:47:24.174542   15224 command_runner.go:130] > [  +1.232270] kauditd_printk_skb: 44 callbacks suppressed
	I1014 08:47:24.174542   15224 command_runner.go:130] > [  +5.880292] kauditd_printk_skb: 30 callbacks suppressed
	I1014 08:47:24.174542   15224 command_runner.go:130] > [  +4.216972] systemd-fstab-generator[2460]: Ignoring "noauto" option for root device
	I1014 08:47:24.174542   15224 command_runner.go:130] > [ +15.813728] kauditd_printk_skb: 72 callbacks suppressed
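The dmesg excerpt above comes from the filter in the ssh_runner line that opens this block: only warning-and-above kernel messages are kept, with human-readable timestamps and color disabled. The same view can be reproduced against a live profile by running the identical pipeline over minikube ssh (profile name taken from this run; a sketch, not part of the harness):

    # -P: no pager, -H: human-readable timestamps, -L=never: disable color
    minikube -p multinode-671000 ssh -- \
      "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"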
	I1014 08:47:24.174542   15224 logs.go:123] Gathering logs for coredns [d9831e9f8ce8] ...
	I1014 08:47:24.174542   15224 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9831e9f8ce8"
	I1014 08:47:24.215500   15224 command_runner.go:130] > .:53
	I1014 08:47:24.215601   15224 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 6020d6b23d8ab86e45d1d2aab12a43bd19ffd1800c3e6fd0a66779be525e59cd5618fbe141b55ee3a43e920652bc72e7efafa696e55e63b2f614e5fc80dabe10
	I1014 08:47:24.215601   15224 command_runner.go:130] > CoreDNS-1.11.3
	I1014 08:47:24.215601   15224 command_runner.go:130] > linux/amd64, go1.21.11, a6338e9
	I1014 08:47:24.215601   15224 command_runner.go:130] > [INFO] 127.0.0.1:35483 - 39257 "HINFO IN 8382239991273371198.8905610076788717940. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.074337261s
	I1014 08:47:24.215601   15224 command_runner.go:130] > [INFO] 10.244.1.2:36950 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0003062s
	I1014 08:47:24.215601   15224 command_runner.go:130] > [INFO] 10.244.1.2:49277 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.118118924s
	I1014 08:47:24.215722   15224 command_runner.go:130] > [INFO] 10.244.1.2:33122 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.153089702s
	I1014 08:47:24.215722   15224 command_runner.go:130] > [INFO] 10.244.1.2:44549 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.188160849s
	I1014 08:47:24.215722   15224 command_runner.go:130] > [INFO] 10.244.0.3:43390 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000191s
	I1014 08:47:24.215803   15224 command_runner.go:130] > [INFO] 10.244.0.3:59817 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000279499s
	I1014 08:47:24.215803   15224 command_runner.go:130] > [INFO] 10.244.0.3:34294 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.0002004s
	I1014 08:47:24.215803   15224 command_runner.go:130] > [INFO] 10.244.0.3:56220 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.0002257s
	I1014 08:47:24.215861   15224 command_runner.go:130] > [INFO] 10.244.1.2:44291 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002098s
	I1014 08:47:24.215861   15224 command_runner.go:130] > [INFO] 10.244.1.2:42361 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.17965629s
	I1014 08:47:24.215861   15224 command_runner.go:130] > [INFO] 10.244.1.2:48756 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0002923s
	I1014 08:47:24.215861   15224 command_runner.go:130] > [INFO] 10.244.1.2:53437 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000274799s
	I1014 08:47:24.215861   15224 command_runner.go:130] > [INFO] 10.244.1.2:60026 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.013560692s
	I1014 08:47:24.215861   15224 command_runner.go:130] > [INFO] 10.244.1.2:39241 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001752s
	I1014 08:47:24.215861   15224 command_runner.go:130] > [INFO] 10.244.1.2:36696 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0003084s
	I1014 08:47:24.215861   15224 command_runner.go:130] > [INFO] 10.244.1.2:51603 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001109s
	I1014 08:47:24.215861   15224 command_runner.go:130] > [INFO] 10.244.0.3:37516 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002057s
	I1014 08:47:24.215861   15224 command_runner.go:130] > [INFO] 10.244.0.3:41090 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0001329s
	I1014 08:47:24.215861   15224 command_runner.go:130] > [INFO] 10.244.0.3:45330 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001397s
	I1014 08:47:24.215861   15224 command_runner.go:130] > [INFO] 10.244.0.3:46520 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000154699s
	I1014 08:47:24.215861   15224 command_runner.go:130] > [INFO] 10.244.0.3:40342 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0001347s
	I1014 08:47:24.215861   15224 command_runner.go:130] > [INFO] 10.244.0.3:60385 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001518s
	I1014 08:47:24.215861   15224 command_runner.go:130] > [INFO] 10.244.0.3:56338 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001527s
	I1014 08:47:24.215861   15224 command_runner.go:130] > [INFO] 10.244.0.3:50480 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0002505s
	I1014 08:47:24.215861   15224 command_runner.go:130] > [INFO] 10.244.1.2:60726 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002297s
	I1014 08:47:24.215861   15224 command_runner.go:130] > [INFO] 10.244.1.2:44347 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0002249s
	I1014 08:47:24.215861   15224 command_runner.go:130] > [INFO] 10.244.1.2:49092 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001075s
	I1014 08:47:24.215861   15224 command_runner.go:130] > [INFO] 10.244.1.2:54937 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000068s
	I1014 08:47:24.215861   15224 command_runner.go:130] > [INFO] 10.244.0.3:59749 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002552s
	I1014 08:47:24.215861   15224 command_runner.go:130] > [INFO] 10.244.0.3:56008 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0002499s
	I1014 08:47:24.215861   15224 command_runner.go:130] > [INFO] 10.244.0.3:60338 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001065s
	I1014 08:47:24.215861   15224 command_runner.go:130] > [INFO] 10.244.0.3:49712 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001634s
	I1014 08:47:24.215861   15224 command_runner.go:130] > [INFO] 10.244.1.2:56508 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001577s
	I1014 08:47:24.215861   15224 command_runner.go:130] > [INFO] 10.244.1.2:45387 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001545s
	I1014 08:47:24.215861   15224 command_runner.go:130] > [INFO] 10.244.1.2:39608 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0001419s
	I1014 08:47:24.215861   15224 command_runner.go:130] > [INFO] 10.244.1.2:53878 - 5 "PTR IN 1.96.20.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.0001923s
	I1014 08:47:24.215861   15224 command_runner.go:130] > [INFO] 10.244.0.3:58589 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002828s
	I1014 08:47:24.215861   15224 command_runner.go:130] > [INFO] 10.244.0.3:58608 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001609s
	I1014 08:47:24.215861   15224 command_runner.go:130] > [INFO] 10.244.0.3:52599 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000633s
	I1014 08:47:24.215861   15224 command_runner.go:130] > [INFO] 10.244.0.3:58233 - 5 "PTR IN 1.96.20.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.0001058s
	I1014 08:47:24.216397   15224 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I1014 08:47:24.216397   15224 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
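Each coredns query line above follows the log plugin's fixed one-line-per-query format: client address, DNS message ID, the quoted question, then the response summary. Decoded against the first pod query in this block (field meanings per the coredns log plugin's documented format; a reading aid, not output from the run):

    10.244.1.2:36950   client address and source port
    2                  DNS message ID chosen by the client
    "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512"
                       type, class, name, transport, request size in bytes,
                       DNSSEC-OK bit, advertised UDP buffer size
    NOERROR qr,aa,rd   response code and DNS header flags
    116                response size in bytes
    0.0003062s         time CoreDNS took to serve the query

The closing SIGTERM and lameduck entries simply record this coredns instance being shut down, consistent with the container having exited before the logs were gathered.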
	I1014 08:47:24.219691   15224 logs.go:123] Gathering logs for kindnet [fcdf89a3ac8c] ...
	I1014 08:47:24.219753   15224 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcdf89a3ac8c"
	I1014 08:47:24.269436   15224 command_runner.go:130] ! I1014 15:32:44.862261       1 main.go:300] handling current node
	I1014 08:47:24.269824   15224 command_runner.go:130] ! I1014 15:32:44.862301       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.269824   15224 command_runner.go:130] ! I1014 15:32:44.862313       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.269947   15224 command_runner.go:130] ! I1014 15:32:44.862605       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.269947   15224 command_runner.go:130] ! I1014 15:32:44.862636       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.269947   15224 command_runner.go:130] ! I1014 15:32:54.862103       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.269947   15224 command_runner.go:130] ! I1014 15:32:54.862232       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.269947   15224 command_runner.go:130] ! I1014 15:32:54.862979       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.269947   15224 command_runner.go:130] ! I1014 15:32:54.863013       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.269947   15224 command_runner.go:130] ! I1014 15:32:54.863219       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.270251   15224 command_runner.go:130] ! I1014 15:32:54.863233       1 main.go:300] handling current node
	I1014 08:47:24.270298   15224 command_runner.go:130] ! I1014 15:33:04.864377       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.270298   15224 command_runner.go:130] ! I1014 15:33:04.864510       1 main.go:300] handling current node
	I1014 08:47:24.270298   15224 command_runner.go:130] ! I1014 15:33:04.864534       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.270985   15224 command_runner.go:130] ! I1014 15:33:04.864544       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.270985   15224 command_runner.go:130] ! I1014 15:33:04.864795       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.270985   15224 command_runner.go:130] ! I1014 15:33:04.864807       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.271372   15224 command_runner.go:130] ! I1014 15:33:14.870098       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.272323   15224 command_runner.go:130] ! I1014 15:33:14.870279       1 main.go:300] handling current node
	I1014 08:47:24.272323   15224 command_runner.go:130] ! I1014 15:33:14.870319       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.272323   15224 command_runner.go:130] ! I1014 15:33:14.870394       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.272543   15224 command_runner.go:130] ! I1014 15:33:14.872221       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.273384   15224 command_runner.go:130] ! I1014 15:33:14.872265       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.273749   15224 command_runner.go:130] ! I1014 15:33:24.862168       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.274059   15224 command_runner.go:130] ! I1014 15:33:24.862234       1 main.go:300] handling current node
	I1014 08:47:24.275045   15224 command_runner.go:130] ! I1014 15:33:24.862290       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.275142   15224 command_runner.go:130] ! I1014 15:33:24.862303       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.275167   15224 command_runner.go:130] ! I1014 15:33:24.862799       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.275167   15224 command_runner.go:130] ! I1014 15:33:24.862950       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.275167   15224 command_runner.go:130] ! I1014 15:33:34.870712       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.275167   15224 command_runner.go:130] ! I1014 15:33:34.870952       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.275167   15224 command_runner.go:130] ! I1014 15:33:34.871749       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.275167   15224 command_runner.go:130] ! I1014 15:33:34.871848       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.275167   15224 command_runner.go:130] ! I1014 15:33:34.872312       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.275167   15224 command_runner.go:130] ! I1014 15:33:34.872409       1 main.go:300] handling current node
	I1014 08:47:24.275167   15224 command_runner.go:130] ! I1014 15:33:44.868271       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.275756   15224 command_runner.go:130] ! I1014 15:33:44.868442       1 main.go:300] handling current node
	I1014 08:47:24.275831   15224 command_runner.go:130] ! I1014 15:33:44.868482       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.275874   15224 command_runner.go:130] ! I1014 15:33:44.868509       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.275911   15224 command_runner.go:130] ! I1014 15:33:44.869165       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.275911   15224 command_runner.go:130] ! I1014 15:33:44.869252       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.275911   15224 command_runner.go:130] ! I1014 15:33:54.862162       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.275911   15224 command_runner.go:130] ! I1014 15:33:54.862365       1 main.go:300] handling current node
	I1014 08:47:24.275911   15224 command_runner.go:130] ! I1014 15:33:54.862404       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.275911   15224 command_runner.go:130] ! I1014 15:33:54.862429       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.275911   15224 command_runner.go:130] ! I1014 15:33:54.862766       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.275911   15224 command_runner.go:130] ! I1014 15:33:54.862800       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.275911   15224 command_runner.go:130] ! I1014 15:34:04.870860       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.275911   15224 command_runner.go:130] ! I1014 15:34:04.870993       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.275911   15224 command_runner.go:130] ! I1014 15:34:04.871751       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.275911   15224 command_runner.go:130] ! I1014 15:34:04.871830       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.275911   15224 command_runner.go:130] ! I1014 15:34:04.872365       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.275911   15224 command_runner.go:130] ! I1014 15:34:04.872444       1 main.go:300] handling current node
	I1014 08:47:24.275911   15224 command_runner.go:130] ! I1014 15:34:14.868274       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.275911   15224 command_runner.go:130] ! I1014 15:34:14.868410       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.275911   15224 command_runner.go:130] ! I1014 15:34:14.869151       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.275911   15224 command_runner.go:130] ! I1014 15:34:14.869244       1 main.go:300] handling current node
	I1014 08:47:24.275911   15224 command_runner.go:130] ! I1014 15:34:14.869263       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.275911   15224 command_runner.go:130] ! I1014 15:34:14.869271       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.275911   15224 command_runner.go:130] ! I1014 15:34:24.869326       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.275911   15224 command_runner.go:130] ! I1014 15:34:24.869383       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.275911   15224 command_runner.go:130] ! I1014 15:34:24.870365       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.275911   15224 command_runner.go:130] ! I1014 15:34:24.870464       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.275911   15224 command_runner.go:130] ! I1014 15:34:24.871197       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.275911   15224 command_runner.go:130] ! I1014 15:34:24.871235       1 main.go:300] handling current node
	I1014 08:47:24.275911   15224 command_runner.go:130] ! I1014 15:34:34.862280       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.275911   15224 command_runner.go:130] ! I1014 15:34:34.862387       1 main.go:300] handling current node
	I1014 08:47:24.275911   15224 command_runner.go:130] ! I1014 15:34:34.862420       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.275911   15224 command_runner.go:130] ! I1014 15:34:34.862440       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.275911   15224 command_runner.go:130] ! I1014 15:34:34.862809       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.275911   15224 command_runner.go:130] ! I1014 15:34:34.862844       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.276439   15224 command_runner.go:130] ! I1014 15:34:44.870611       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.276439   15224 command_runner.go:130] ! I1014 15:34:44.870703       1 main.go:300] handling current node
	I1014 08:47:24.276439   15224 command_runner.go:130] ! I1014 15:34:44.870732       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.276517   15224 command_runner.go:130] ! I1014 15:34:44.870826       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.276517   15224 command_runner.go:130] ! I1014 15:34:44.871348       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.276517   15224 command_runner.go:130] ! I1014 15:34:44.871437       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.276573   15224 command_runner.go:130] ! I1014 15:34:54.862260       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.276573   15224 command_runner.go:130] ! I1014 15:34:54.862358       1 main.go:300] handling current node
	I1014 08:47:24.276573   15224 command_runner.go:130] ! I1014 15:34:54.862379       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:34:54.862388       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:34:54.862782       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:34:54.862862       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:35:04.871418       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:35:04.871489       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:35:04.872322       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:35:04.872416       1 main.go:300] handling current node
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:35:04.872437       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:35:04.872445       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:35:14.870301       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:35:14.870413       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:35:14.870922       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:35:14.870941       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:35:14.871055       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:35:14.871086       1 main.go:300] handling current node
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:35:24.870776       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:35:24.870814       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:35:24.871449       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:35:24.871682       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:35:24.872057       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:35:24.872149       1 main.go:300] handling current node
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:35:34.871155       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:35:34.871422       1 main.go:300] handling current node
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:35:34.876612       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:35:34.876630       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:35:34.876808       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:35:34.876817       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:35:44.872405       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:35:44.872450       1 main.go:300] handling current node
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:35:44.872467       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:35:44.872473       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:35:44.873120       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:35:44.873155       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:35:54.862113       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:35:54.862220       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:35:54.862608       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:35:54.862725       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:35:54.862993       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:35:54.863089       1 main.go:300] handling current node
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:36:04.870594       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:36:04.870634       1 main.go:300] handling current node
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:36:04.870705       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:36:04.870719       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:36:04.871246       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:36:04.871261       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:36:14.862194       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:36:14.862337       1 main.go:300] handling current node
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:36:14.862361       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:36:14.862370       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:36:14.863024       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:36:14.863053       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:36:24.870839       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:36:24.871114       1 main.go:300] handling current node
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:36:24.871303       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:36:24.871618       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:36:24.872052       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:36:24.872164       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:36:34.870320       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:36:34.870375       1 main.go:300] handling current node
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:36:34.870396       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:36:34.870404       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:36:34.870774       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:36:34.870810       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:36:44.864305       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:36:44.864530       1 main.go:300] handling current node
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:36:44.864616       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:36:44.864683       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:36:44.865206       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:36:44.865241       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:36:54.862701       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:36:54.862834       1 main.go:300] handling current node
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:36:54.862940       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:36:54.863054       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:36:54.864321       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:36:54.864397       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:37:04.863761       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:37:04.863854       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:37:04.864505       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:37:04.864638       1 main.go:300] handling current node
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:37:04.864656       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:37:04.864664       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:37:14.866293       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:37:14.866653       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:37:14.867034       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:37:14.867067       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:37:14.867179       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:37:14.867247       1 main.go:300] handling current node
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:37:24.867969       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:37:24.868019       1 main.go:300] handling current node
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:37:24.868036       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:37:24.868043       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:37:24.868511       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:37:24.868549       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:37:34.863786       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:37:34.864224       1 main.go:300] handling current node
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:37:34.864384       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:37:34.864448       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:37:34.864771       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:37:34.864865       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:37:44.871310       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:37:44.871352       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:37:44.871803       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:37:44.871837       1 main.go:300] handling current node
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:37:44.871852       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:37:44.871859       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:37:54.862573       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:37:54.862694       1 main.go:300] handling current node
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:37:54.862714       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:37:54.862723       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:37:54.863288       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:37:54.863364       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:38:04.872124       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:38:04.872285       1 main.go:300] handling current node
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:38:04.872330       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:38:04.872343       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:38:04.873184       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:38:04.873352       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:38:14.863654       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:38:14.863788       1 main.go:300] handling current node
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:38:14.863812       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:38:14.863822       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:38:14.864488       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:38:14.864585       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:38:24.868537       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:38:24.868643       1 main.go:300] handling current node
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:38:24.868664       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:38:24.868672       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:38:24.869258       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:38:24.869347       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:38:34.864233       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:38:34.864469       1 main.go:300] handling current node
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:38:34.864497       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:38:34.864509       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:38:34.865023       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:38:34.865061       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:38:44.870754       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:38:44.870859       1 main.go:300] handling current node
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:38:44.870919       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:38:44.870931       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:38:44.871124       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:38:44.871155       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:38:54.862849       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:38:54.863008       1 main.go:300] handling current node
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:38:54.863029       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:38:54.863040       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:38:54.863313       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:38:54.863343       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:39:04.861865       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:39:04.862353       1 main.go:300] handling current node
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:39:04.862819       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:39:04.863053       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:39:04.863648       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:39:04.865127       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:39:14.870473       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:39:14.870526       1 main.go:300] handling current node
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:39:14.870544       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:39:14.870551       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:39:14.871123       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:39:14.871161       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:39:24.862264       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:39:24.862304       1 main.go:300] handling current node
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:39:24.862323       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:39:24.862331       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:39:24.863326       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:39:24.863417       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:39:34.862868       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:39:34.863041       1 main.go:300] handling current node
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:39:34.863063       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:39:34.863072       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:39:34.863370       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:39:34.863460       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:39:44.872051       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:39:44.872175       1 main.go:300] handling current node
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:39:44.872198       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:39:44.872392       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:39:44.873038       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:39:44.873160       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:39:54.862953       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:39:54.862990       1 main.go:300] handling current node
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:39:54.863013       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:39:54.863022       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:39:54.863377       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:39:54.863412       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:40:04.864160       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:40:04.864198       1 main.go:300] handling current node
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:40:04.864216       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:40:04.864222       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:40:04.864390       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:40:04.864399       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:40:14.862864       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:40:14.863081       1 main.go:300] handling current node
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:40:14.863442       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:40:14.863496       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:40:14.864019       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:40:14.864052       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:40:24.867383       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:40:24.867717       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:40:24.868487       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:40:24.868619       1 main.go:300] handling current node
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:40:24.868640       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:40:24.868650       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:40:34.866060       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:40:34.866194       1 main.go:300] handling current node
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:40:34.866224       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:40:34.866240       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:40:34.867632       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:40:34.867868       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:40:44.875002       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:40:44.875336       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:40:44.875792       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:40:44.875991       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:40:44.876302       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:40:44.876531       1 main.go:300] handling current node
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:40:54.862504       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:40:54.862640       1 main.go:300] handling current node
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:40:54.862766       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:40:54.862834       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:40:54.863108       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:40:54.863140       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:41:04.863181       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:41:04.863304       1 main.go:300] handling current node
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:41:04.863326       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:41:04.863335       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:41:04.863824       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:41:04.863963       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:41:14.868270       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:41:14.868443       1 main.go:300] handling current node
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:41:14.868487       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:41:14.868541       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:41:14.868808       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:41:14.868843       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:41:24.862261       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:41:24.862508       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:41:24.863242       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:41:24.863792       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:41:24.864172       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:41:24.864327       1 main.go:300] handling current node
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:41:34.862294       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:41:34.862355       1 main.go:300] handling current node
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:41:34.862377       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:41:34.862385       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:41:44.862674       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:41:44.862799       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:41:44.863254       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.20.102.29 Flags: [] Table: 0 Realm: 0} 
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:41:44.863509       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:41:44.863768       1 main.go:300] handling current node
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:41:44.863945       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:41:44.864052       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:41:54.862083       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:41:54.862208       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:41:54.862577       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:41:54.862723       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:41:54.863005       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:41:54.863097       1 main.go:300] handling current node
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:42:04.870504       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:42:04.871039       1 main.go:300] handling current node
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:42:04.871167       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:42:04.871277       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:42:04.871721       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:42:04.871740       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:42:14.862252       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:42:14.862455       1 main.go:300] handling current node
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:42:14.862499       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:42:14.862521       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:42:14.863189       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:42:14.863224       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:42:24.862819       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:42:24.863072       1 main.go:300] handling current node
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:42:24.863093       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:42:24.863103       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:42:24.864093       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:42:24.864136       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:42:34.863373       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:42:34.863425       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:42:34.863670       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:42:34.863742       1 main.go:300] handling current node
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:42:34.863763       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:42:34.863771       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:42:44.861842       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:42:44.862176       1 main.go:300] handling current node
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:42:44.862271       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:42:44.862357       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:42:44.862743       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:42:44.863009       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:42:54.863140       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:42:54.863181       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:42:54.863865       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:42:54.864051       1 main.go:300] handling current node
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:42:54.864417       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:42:54.864427       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:43:04.862539       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:43:04.862625       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:43:04.863289       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:43:04.863395       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:43:04.863612       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:43:04.863764       1 main.go:300] handling current node
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:43:14.871242       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:43:14.871727       1 main.go:300] handling current node
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:43:14.871818       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:43:14.871846       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:43:14.872085       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:43:14.872201       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:43:24.871405       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:43:24.871540       1 main.go:300] handling current node
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:43:24.871566       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:43:24.871575       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:43:24.871835       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:43:24.872193       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:43:34.863042       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:43:34.863237       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:43:34.863962       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:43:34.864059       1 main.go:300] handling current node
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:43:34.864077       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:43:34.864085       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:43:44.871016       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:43:44.871057       1 main.go:300] handling current node
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:43:44.871074       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:43:44.871081       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:43:44.871299       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:43:44.871310       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:24.298501   15224 logs.go:123] Gathering logs for container status ...
	I1014 08:47:24.298501   15224 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 08:47:24.363797   15224 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I1014 08:47:24.363923   15224 command_runner.go:130] > 1adddc667bd90       8c811b4aec35f                                                                                         4 seconds ago        Running             busybox                   1                   9092d17516eb3       busybox-7dff88458-vlp7j
	I1014 08:47:24.363978   15224 command_runner.go:130] > 5d223e2e64fcd       c69fa2e9cbf5f                                                                                         4 seconds ago        Running             coredns                   1                   429b989a1a986       coredns-7c65d6cfc9-fs9ct
	I1014 08:47:24.363978   15224 command_runner.go:130] > 9d526b02ee41c       6e38f40d628db                                                                                         22 seconds ago       Running             storage-provisioner       2                   cdcdd532ba136       storage-provisioner
	I1014 08:47:24.363978   15224 command_runner.go:130] > bba035362eb97       3a5bc24055c9e                                                                                         About a minute ago   Running             kindnet-cni               1                   7bcadf1f0885f       kindnet-wqrx6
	I1014 08:47:24.364067   15224 command_runner.go:130] > c76c258568107       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   cdcdd532ba136       storage-provisioner
	I1014 08:47:24.364097   15224 command_runner.go:130] > e83db276dec37       60c005f310ff3                                                                                         About a minute ago   Running             kube-proxy                1                   6f8bdf552734e       kube-proxy-r74dx
	I1014 08:47:24.364097   15224 command_runner.go:130] > 48c8492e231e1       2e96e5913fc06                                                                                         About a minute ago   Running             etcd                      0                   0697a11790e80       etcd-multinode-671000
	I1014 08:47:24.364189   15224 command_runner.go:130] > 8af48c446f7e1       175ffd71cce3d                                                                                         About a minute ago   Running             kube-controller-manager   1                   7bd4c36606eef       kube-controller-manager-multinode-671000
	I1014 08:47:24.364239   15224 command_runner.go:130] > a834664fc8b80       6bab7719df100                                                                                         About a minute ago   Running             kube-apiserver            0                   6155e8be2d5d7       kube-apiserver-multinode-671000
	I1014 08:47:24.364268   15224 command_runner.go:130] > d428685276e1e       9aa1fad941575                                                                                         About a minute ago   Running             kube-scheduler            1                   1d3033f871fb1       kube-scheduler-multinode-671000
	I1014 08:47:24.364268   15224 command_runner.go:130] > cbf0b40e378b9       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   20 minutes ago       Exited              busybox                   0                   06e529266db4b       busybox-7dff88458-vlp7j
	I1014 08:47:24.364331   15224 command_runner.go:130] > d9831e9f8ce8c       c69fa2e9cbf5f                                                                                         24 minutes ago       Exited              coredns                   0                   2f8cc9a218fef       coredns-7c65d6cfc9-fs9ct
	I1014 08:47:24.364390   15224 command_runner.go:130] > fcdf89a3ac8ce       kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387              24 minutes ago       Exited              kindnet-cni               0                   5e48ddcfdf90a       kindnet-wqrx6
	I1014 08:47:24.364432   15224 command_runner.go:130] > ea19428d70363       60c005f310ff3                                                                                         24 minutes ago       Exited              kube-proxy                0                   7144d8ce208cf       kube-proxy-r74dx
	I1014 08:47:24.364477   15224 command_runner.go:130] > 661e75bbf6b46       9aa1fad941575                                                                                         24 minutes ago       Exited              kube-scheduler            0                   2dc78387553ff       kube-scheduler-multinode-671000
	I1014 08:47:24.364545   15224 command_runner.go:130] > 712aad669c9f6       175ffd71cce3d                                                                                         24 minutes ago       Exited              kube-controller-manager   0                   bfdde08319e32       kube-controller-manager-multinode-671000
	I1014 08:47:24.366660   15224 logs.go:123] Gathering logs for coredns [5d223e2e64fc] ...
	I1014 08:47:24.366660   15224 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d223e2e64fc"
	I1014 08:47:24.400410   15224 command_runner.go:130] > .:53
	I1014 08:47:24.400712   15224 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 6020d6b23d8ab86e45d1d2aab12a43bd19ffd1800c3e6fd0a66779be525e59cd5618fbe141b55ee3a43e920652bc72e7efafa696e55e63b2f614e5fc80dabe10
	I1014 08:47:24.400712   15224 command_runner.go:130] > CoreDNS-1.11.3
	I1014 08:47:24.400712   15224 command_runner.go:130] > linux/amd64, go1.21.11, a6338e9
	I1014 08:47:24.400712   15224 command_runner.go:130] > [INFO] 127.0.0.1:42996 - 9104 "HINFO IN 5434967794797104596.5472118418078127170. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.148386647s
	I1014 08:47:24.400945   15224 logs.go:123] Gathering logs for kindnet [bba035362eb9] ...
	I1014 08:47:24.401030   15224 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bba035362eb9"
	I1014 08:47:24.436616   15224 command_runner.go:130] ! I1014 15:46:18.000845       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1014 08:47:24.437607   15224 command_runner.go:130] ! I1014 15:46:18.015386       1 main.go:139] hostIP = 172.20.106.123
	I1014 08:47:24.437607   15224 command_runner.go:130] ! podIP = 172.20.106.123
	I1014 08:47:24.437607   15224 command_runner.go:130] ! I1014 15:46:18.015613       1 main.go:148] setting mtu 1500 for CNI 
	I1014 08:47:24.437695   15224 command_runner.go:130] ! I1014 15:46:18.015630       1 main.go:178] kindnetd IP family: "ipv4"
	I1014 08:47:24.437695   15224 command_runner.go:130] ! I1014 15:46:18.015641       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I1014 08:47:24.437695   15224 command_runner.go:130] ! I1014 15:46:18.919987       1 main.go:238] Error creating network policy controller: could not run nftables command: /dev/stdin:1:1-37: Error: Could not process rule: Operation not supported
	I1014 08:47:24.437767   15224 command_runner.go:130] ! add table inet kube-network-policies
	I1014 08:47:24.437767   15224 command_runner.go:130] ! ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
	I1014 08:47:24.437767   15224 command_runner.go:130] ! , skipping network policies
	I1014 08:47:24.437767   15224 command_runner.go:130] ! W1014 15:46:48.934772       1 reflector.go:561] pkg/mod/k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I1014 08:47:24.437846   15224 command_runner.go:130] ! E1014 15:46:48.935157       1 reflector.go:158] "Unhandled Error" err="pkg/mod/k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError"
	I1014 08:47:24.437846   15224 command_runner.go:130] ! I1014 15:46:58.925780       1 main.go:296] Handling node with IPs: map[172.20.106.123:{}]
	I1014 08:47:24.437846   15224 command_runner.go:130] ! I1014 15:46:58.926393       1 main.go:300] handling current node
	I1014 08:47:24.437916   15224 command_runner.go:130] ! I1014 15:46:58.927562       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.437916   15224 command_runner.go:130] ! I1014 15:46:58.927665       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.437916   15224 command_runner.go:130] ! I1014 15:46:58.928645       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.20.109.137 Flags: [] Table: 0 Realm: 0} 
	I1014 08:47:24.437977   15224 command_runner.go:130] ! I1014 15:46:58.929412       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:24.437977   15224 command_runner.go:130] ! I1014 15:46:58.929466       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:24.438026   15224 command_runner.go:130] ! I1014 15:46:58.929555       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.20.102.29 Flags: [] Table: 0 Realm: 0} 
	I1014 08:47:24.438057   15224 command_runner.go:130] ! I1014 15:47:08.930440       1 main.go:296] Handling node with IPs: map[172.20.106.123:{}]
	I1014 08:47:24.438057   15224 command_runner.go:130] ! I1014 15:47:08.930586       1 main.go:300] handling current node
	I1014 08:47:24.438057   15224 command_runner.go:130] ! I1014 15:47:08.930648       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.438057   15224 command_runner.go:130] ! I1014 15:47:08.930739       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.438057   15224 command_runner.go:130] ! I1014 15:47:08.931080       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:24.438057   15224 command_runner.go:130] ! I1014 15:47:08.931268       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:24.438057   15224 command_runner.go:130] ! I1014 15:47:18.921538       1 main.go:296] Handling node with IPs: map[172.20.106.123:{}]
	I1014 08:47:24.438057   15224 command_runner.go:130] ! I1014 15:47:18.921639       1 main.go:300] handling current node
	I1014 08:47:24.438057   15224 command_runner.go:130] ! I1014 15:47:18.921689       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.438057   15224 command_runner.go:130] ! I1014 15:47:18.921698       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.438057   15224 command_runner.go:130] ! I1014 15:47:18.922117       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:24.438057   15224 command_runner.go:130] ! I1014 15:47:18.922190       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:24.440700   15224 logs.go:123] Gathering logs for etcd [48c8492e231e] ...
	I1014 08:47:24.440700   15224 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48c8492e231e"
	I1014 08:47:24.469663   15224 command_runner.go:130] ! {"level":"warn","ts":"2024-10-14T15:46:11.845953Z","caller":"embed/config.go:687","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I1014 08:47:24.470625   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:11.848739Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.20.106.123:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.20.106.123:2380","--initial-cluster=multinode-671000=https://172.20.106.123:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.20.106.123:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.20.106.123:2380","--name=multinode-671000","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I1014 08:47:24.470651   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:11.848857Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I1014 08:47:24.470651   15224 command_runner.go:130] ! {"level":"warn","ts":"2024-10-14T15:46:11.848886Z","caller":"embed/config.go:687","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I1014 08:47:24.470651   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:11.848900Z","caller":"embed/etcd.go:128","msg":"configuring peer listeners","listen-peer-urls":["https://172.20.106.123:2380"]}
	I1014 08:47:24.470723   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:11.848962Z","caller":"embed/etcd.go:496","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I1014 08:47:24.470723   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:11.854418Z","caller":"embed/etcd.go:136","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.20.106.123:2379"]}
	I1014 08:47:24.470891   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:11.857036Z","caller":"embed/etcd.go:310","msg":"starting an etcd server","etcd-version":"3.5.15","git-sha":"9a5533382","go-version":"go1.21.12","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-671000","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.20.106.123:2380"],"listen-peer-urls":["https://172.20.106.123:2380"],"advertise-client-urls":["https://172.20.106.123:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.20.106.123:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I1014 08:47:24.470956   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:11.899392Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"40.66952ms"}
	I1014 08:47:24.470983   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:11.949173Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	I1014 08:47:24.471030   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:11.984197Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"2dcbff584edb18cc","local-member-id":"782c48cbdf98397b","commit-index":2088}
	I1014 08:47:24.471030   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:11.985089Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"782c48cbdf98397b switched to configuration voters=()"}
	I1014 08:47:24.471030   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:11.987739Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"782c48cbdf98397b became follower at term 2"}
	I1014 08:47:24.471030   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:11.987772Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 782c48cbdf98397b [peers: [], term: 2, commit: 2088, applied: 0, lastindex: 2088, lastterm: 2]"}
	I1014 08:47:24.471160   15224 command_runner.go:130] ! {"level":"warn","ts":"2024-10-14T15:46:12.003567Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	I1014 08:47:24.471160   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.010981Z","caller":"mvcc/kvstore.go:341","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1396}
	I1014 08:47:24.471321   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.025362Z","caller":"mvcc/kvstore.go:418","msg":"kvstore restored","current-rev":1813}
	I1014 08:47:24.471321   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.035174Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I1014 08:47:24.471321   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.045608Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"782c48cbdf98397b","timeout":"7s"}
	I1014 08:47:24.471406   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.046705Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"782c48cbdf98397b"}
	I1014 08:47:24.471406   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.046807Z","caller":"etcdserver/server.go:867","msg":"starting etcd server","local-member-id":"782c48cbdf98397b","local-server-version":"3.5.15","cluster-version":"to_be_decided"}
	I1014 08:47:24.471406   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.047198Z","caller":"etcdserver/server.go:767","msg":"starting initial election tick advance","election-ticks":10}
	I1014 08:47:24.471473   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.047977Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I1014 08:47:24.471532   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.048044Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I1014 08:47:24.471532   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.048058Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I1014 08:47:24.471532   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.048736Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"782c48cbdf98397b switched to configuration voters=(8659376223993477499)"}
	I1014 08:47:24.471532   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.049262Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"2dcbff584edb18cc","local-member-id":"782c48cbdf98397b","added-peer-id":"782c48cbdf98397b","added-peer-peer-urls":["https://172.20.100.167:2380"]}
	I1014 08:47:24.471532   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.049694Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"2dcbff584edb18cc","local-member-id":"782c48cbdf98397b","cluster-version":"3.5"}
	I1014 08:47:24.471532   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.049815Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I1014 08:47:24.471532   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.056204Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	I1014 08:47:24.471532   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.062166Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I1014 08:47:24.471532   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.062574Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"782c48cbdf98397b","initial-advertise-peer-urls":["https://172.20.106.123:2380"],"listen-peer-urls":["https://172.20.106.123:2380"],"advertise-client-urls":["https://172.20.106.123:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.20.106.123:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I1014 08:47:24.471532   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.062654Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I1014 08:47:24.471532   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.062749Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"172.20.106.123:2380"}
	I1014 08:47:24.471532   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.062764Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"172.20.106.123:2380"}
	I1014 08:47:24.471532   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.489231Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"782c48cbdf98397b is starting a new election at term 2"}
	I1014 08:47:24.472162   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.489328Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"782c48cbdf98397b became pre-candidate at term 2"}
	I1014 08:47:24.472162   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.489358Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"782c48cbdf98397b received MsgPreVoteResp from 782c48cbdf98397b at term 2"}
	I1014 08:47:24.472162   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.489374Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"782c48cbdf98397b became candidate at term 3"}
	I1014 08:47:24.472261   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.489385Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"782c48cbdf98397b received MsgVoteResp from 782c48cbdf98397b at term 3"}
	I1014 08:47:24.472261   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.489395Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"782c48cbdf98397b became leader at term 3"}
	I1014 08:47:24.472261   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.489487Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 782c48cbdf98397b elected leader 782c48cbdf98397b at term 3"}
	I1014 08:47:24.472346   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.496949Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I1014 08:47:24.472372   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.496902Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"782c48cbdf98397b","local-member-attributes":"{Name:multinode-671000 ClientURLs:[https://172.20.106.123:2379]}","request-path":"/0/members/782c48cbdf98397b/attributes","cluster-id":"2dcbff584edb18cc","publish-timeout":"7s"}
	I1014 08:47:24.472402   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.497822Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I1014 08:47:24.472402   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.499586Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I1014 08:47:24.472402   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.499631Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I1014 08:47:24.472402   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.500815Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	I1014 08:47:24.472402   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.502392Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.20.106.123:2379"}
	I1014 08:47:24.472402   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.503879Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	I1014 08:47:24.472402   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.505686Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I1014 08:47:24.479270   15224 logs.go:123] Gathering logs for kube-proxy [e83db276dec3] ...
	I1014 08:47:24.479270   15224 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e83db276dec3"
	I1014 08:47:24.511770   15224 command_runner.go:130] ! I1014 15:46:17.821967       1 server_linux.go:66] "Using iptables proxy"
	I1014 08:47:24.512352   15224 command_runner.go:130] ! E1014 15:46:17.985243       1 proxier.go:734] "Error cleaning up nftables rules" err=<
	I1014 08:47:24.512352   15224 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-24: Error: Could not process rule: Operation not supported
	I1014 08:47:24.512429   15224 command_runner.go:130] ! 	add table ip kube-proxy
	I1014 08:47:24.512429   15224 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^
	I1014 08:47:24.512429   15224 command_runner.go:130] !  >
	I1014 08:47:24.513427   15224 command_runner.go:130] ! E1014 15:46:18.020523       1 proxier.go:734] "Error cleaning up nftables rules" err=<
	I1014 08:47:24.513943   15224 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
	I1014 08:47:24.513943   15224 command_runner.go:130] ! 	add table ip6 kube-proxy
	I1014 08:47:24.513943   15224 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^^
	I1014 08:47:24.513943   15224 command_runner.go:130] !  >
	I1014 08:47:24.513943   15224 command_runner.go:130] ! I1014 15:46:18.173230       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["172.20.106.123"]
	I1014 08:47:24.513943   15224 command_runner.go:130] ! E1014 15:46:18.173392       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1014 08:47:24.513943   15224 command_runner.go:130] ! I1014 15:46:18.286207       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1014 08:47:24.514109   15224 command_runner.go:130] ! I1014 15:46:18.287289       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1014 08:47:24.514109   15224 command_runner.go:130] ! I1014 15:46:18.287905       1 server_linux.go:169] "Using iptables Proxier"
	I1014 08:47:24.514109   15224 command_runner.go:130] ! I1014 15:46:18.293792       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1014 08:47:24.514109   15224 command_runner.go:130] ! I1014 15:46:18.300740       1 server.go:483] "Version info" version="v1.31.1"
	I1014 08:47:24.514196   15224 command_runner.go:130] ! I1014 15:46:18.300778       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 08:47:24.514196   15224 command_runner.go:130] ! I1014 15:46:18.305824       1 config.go:199] "Starting service config controller"
	I1014 08:47:24.514196   15224 command_runner.go:130] ! I1014 15:46:18.308209       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1014 08:47:24.514257   15224 command_runner.go:130] ! I1014 15:46:18.308868       1 config.go:328] "Starting node config controller"
	I1014 08:47:24.514257   15224 command_runner.go:130] ! I1014 15:46:18.314183       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1014 08:47:24.514257   15224 command_runner.go:130] ! I1014 15:46:18.309398       1 config.go:105] "Starting endpoint slice config controller"
	I1014 08:47:24.514257   15224 command_runner.go:130] ! I1014 15:46:18.317842       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1014 08:47:24.514320   15224 command_runner.go:130] ! I1014 15:46:18.419882       1 shared_informer.go:320] Caches are synced for node config
	I1014 08:47:24.514320   15224 command_runner.go:130] ! I1014 15:46:18.419918       1 shared_informer.go:320] Caches are synced for service config
	I1014 08:47:24.514320   15224 command_runner.go:130] ! I1014 15:46:18.435586       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1014 08:47:24.516804   15224 logs.go:123] Gathering logs for kube-controller-manager [712aad669c9f] ...
	I1014 08:47:24.517328   15224 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 712aad669c9f"
	I1014 08:47:24.557349   15224 command_runner.go:130] ! I1014 15:22:34.276457       1 serving.go:386] Generated self-signed cert in-memory
	I1014 08:47:24.557880   15224 command_runner.go:130] ! I1014 15:22:34.721812       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I1014 08:47:24.557880   15224 command_runner.go:130] ! I1014 15:22:34.722099       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 08:47:24.557880   15224 command_runner.go:130] ! I1014 15:22:34.724748       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1014 08:47:24.557880   15224 command_runner.go:130] ! I1014 15:22:34.725085       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1014 08:47:24.557993   15224 command_runner.go:130] ! I1014 15:22:34.725754       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I1014 08:47:24.557993   15224 command_runner.go:130] ! I1014 15:22:34.725985       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1014 08:47:24.557993   15224 command_runner.go:130] ! I1014 15:22:39.207411       1 controllermanager.go:797] "Started controller" controller="serviceaccount-token-controller"
	I1014 08:47:24.558095   15224 command_runner.go:130] ! I1014 15:22:39.208026       1 controllermanager.go:749] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I1014 08:47:24.558153   15224 command_runner.go:130] ! I1014 15:22:39.207651       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I1014 08:47:24.558153   15224 command_runner.go:130] ! I1014 15:22:39.210064       1 shared_informer.go:320] Caches are synced for tokens
	I1014 08:47:24.558153   15224 command_runner.go:130] ! I1014 15:22:39.224528       1 controllermanager.go:797] "Started controller" controller="taint-eviction-controller"
	I1014 08:47:24.558153   15224 command_runner.go:130] ! I1014 15:22:39.224966       1 taint_eviction.go:281] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I1014 08:47:24.558223   15224 command_runner.go:130] ! I1014 15:22:39.225213       1 taint_eviction.go:287] "Sending events to api server" logger="taint-eviction-controller"
	I1014 08:47:24.558223   15224 command_runner.go:130] ! I1014 15:22:39.226734       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I1014 08:47:24.558284   15224 command_runner.go:130] ! I1014 15:22:39.238395       1 controllermanager.go:797] "Started controller" controller="pod-garbage-collector-controller"
	I1014 08:47:24.558309   15224 command_runner.go:130] ! I1014 15:22:39.238610       1 gc_controller.go:99] "Starting GC controller" logger="pod-garbage-collector-controller"
	I1014 08:47:24.558309   15224 command_runner.go:130] ! I1014 15:22:39.239186       1 shared_informer.go:313] Waiting for caches to sync for GC
	I1014 08:47:24.558350   15224 command_runner.go:130] ! I1014 15:22:39.257957       1 controllermanager.go:797] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I1014 08:47:24.558395   15224 command_runner.go:130] ! I1014 15:22:39.258113       1 horizontal.go:201] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I1014 08:47:24.558395   15224 command_runner.go:130] ! I1014 15:22:39.264110       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I1014 08:47:24.558395   15224 command_runner.go:130] ! I1014 15:22:39.291746       1 controllermanager.go:797] "Started controller" controller="token-cleaner-controller"
	I1014 08:47:24.558458   15224 command_runner.go:130] ! I1014 15:22:39.291968       1 tokencleaner.go:117] "Starting token cleaner controller" logger="token-cleaner-controller"
	I1014 08:47:24.558458   15224 command_runner.go:130] ! I1014 15:22:39.292012       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I1014 08:47:24.558458   15224 command_runner.go:130] ! I1014 15:22:39.292035       1 shared_informer.go:320] Caches are synced for token_cleaner
	I1014 08:47:24.558458   15224 command_runner.go:130] ! E1014 15:22:39.298368       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I1014 08:47:24.558546   15224 command_runner.go:130] ! I1014 15:22:39.298490       1 controllermanager.go:775] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I1014 08:47:24.558577   15224 command_runner.go:130] ! I1014 15:22:39.320068       1 controllermanager.go:797] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I1014 08:47:24.558577   15224 command_runner.go:130] ! I1014 15:22:39.321579       1 pvc_protection_controller.go:105] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I1014 08:47:24.558577   15224 command_runner.go:130] ! I1014 15:22:39.322507       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I1014 08:47:24.558577   15224 command_runner.go:130] ! I1014 15:22:39.334562       1 controllermanager.go:797] "Started controller" controller="endpointslice-mirroring-controller"
	I1014 08:47:24.558577   15224 command_runner.go:130] ! I1014 15:22:39.335065       1 endpointslicemirroring_controller.go:227] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I1014 08:47:24.558577   15224 command_runner.go:130] ! I1014 15:22:39.335174       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I1014 08:47:24.558577   15224 command_runner.go:130] ! I1014 15:22:39.357454       1 controllermanager.go:797] "Started controller" controller="serviceaccount-controller"
	I1014 08:47:24.558577   15224 command_runner.go:130] ! I1014 15:22:39.357636       1 serviceaccounts_controller.go:114] "Starting service account controller" logger="serviceaccount-controller"
	I1014 08:47:24.558577   15224 command_runner.go:130] ! I1014 15:22:39.357669       1 shared_informer.go:313] Waiting for caches to sync for service account
	I1014 08:47:24.558577   15224 command_runner.go:130] ! I1014 15:22:39.377687       1 controllermanager.go:797] "Started controller" controller="daemonset-controller"
	I1014 08:47:24.558577   15224 command_runner.go:130] ! I1014 15:22:39.378056       1 daemon_controller.go:294] "Starting daemon sets controller" logger="daemonset-controller"
	I1014 08:47:24.558577   15224 command_runner.go:130] ! I1014 15:22:39.378087       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I1014 08:47:24.558577   15224 command_runner.go:130] ! I1014 15:22:39.416186       1 controllermanager.go:797] "Started controller" controller="disruption-controller"
	I1014 08:47:24.558577   15224 command_runner.go:130] ! I1014 15:22:39.416643       1 disruption.go:452] "Sending events to api server." logger="disruption-controller"
	I1014 08:47:24.558577   15224 command_runner.go:130] ! I1014 15:22:39.417022       1 disruption.go:463] "Starting disruption controller" logger="disruption-controller"
	I1014 08:47:24.558577   15224 command_runner.go:130] ! I1014 15:22:39.417371       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I1014 08:47:24.558577   15224 command_runner.go:130] ! I1014 15:22:39.469032       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I1014 08:47:24.558577   15224 command_runner.go:130] ! I1014 15:22:39.469507       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I1014 08:47:24.558577   15224 command_runner.go:130] ! I1014 15:22:39.469770       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I1014 08:47:24.558577   15224 command_runner.go:130] ! I1014 15:22:39.470779       1 controllermanager.go:797] "Started controller" controller="certificatesigningrequest-signing-controller"
	I1014 08:47:24.558577   15224 command_runner.go:130] ! I1014 15:22:39.470793       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I1014 08:47:24.558577   15224 command_runner.go:130] ! I1014 15:22:39.471453       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I1014 08:47:24.558577   15224 command_runner.go:130] ! I1014 15:22:39.470805       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I1014 08:47:24.558577   15224 command_runner.go:130] ! I1014 15:22:39.470829       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I1014 08:47:24.558577   15224 command_runner.go:130] ! I1014 15:22:39.471957       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I1014 08:47:24.558577   15224 command_runner.go:130] ! I1014 15:22:39.470841       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I1014 08:47:24.558577   15224 command_runner.go:130] ! I1014 15:22:39.470861       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I1014 08:47:24.558577   15224 command_runner.go:130] ! I1014 15:22:39.472955       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I1014 08:47:24.558577   15224 command_runner.go:130] ! I1014 15:22:39.470870       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I1014 08:47:24.559113   15224 command_runner.go:130] ! I1014 15:22:39.621859       1 controllermanager.go:797] "Started controller" controller="persistentvolume-binder-controller"
	I1014 08:47:24.559113   15224 command_runner.go:130] ! I1014 15:22:39.622638       1 pv_controller_base.go:308] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I1014 08:47:24.559179   15224 command_runner.go:130] ! I1014 15:22:39.623052       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I1014 08:47:24.559179   15224 command_runner.go:130] ! I1014 15:22:39.777984       1 controllermanager.go:797] "Started controller" controller="root-ca-certificate-publisher-controller"
	I1014 08:47:24.559179   15224 command_runner.go:130] ! I1014 15:22:39.778063       1 publisher.go:107] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I1014 08:47:24.559179   15224 command_runner.go:130] ! I1014 15:22:39.778141       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I1014 08:47:24.559179   15224 command_runner.go:130] ! I1014 15:22:39.918879       1 controllermanager.go:797] "Started controller" controller="replicationcontroller-controller"
	I1014 08:47:24.559179   15224 command_runner.go:130] ! I1014 15:22:39.919046       1 replica_set.go:217] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I1014 08:47:24.559179   15224 command_runner.go:130] ! I1014 15:22:39.919060       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I1014 08:47:24.559179   15224 command_runner.go:130] ! I1014 15:22:40.166453       1 controllermanager.go:797] "Started controller" controller="garbage-collector-controller"
	I1014 08:47:24.559414   15224 command_runner.go:130] ! I1014 15:22:40.167822       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I1014 08:47:24.559414   15224 command_runner.go:130] ! I1014 15:22:40.168483       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I1014 08:47:24.559414   15224 command_runner.go:130] ! I1014 15:22:40.168745       1 graph_builder.go:351] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I1014 08:47:24.559414   15224 command_runner.go:130] ! I1014 15:22:40.423412       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I1014 08:47:24.559514   15224 command_runner.go:130] ! I1014 15:22:40.423795       1 controllermanager.go:797] "Started controller" controller="node-ipam-controller"
	I1014 08:47:24.559514   15224 command_runner.go:130] ! I1014 15:22:40.424239       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I1014 08:47:24.559583   15224 command_runner.go:130] ! I1014 15:22:40.424496       1 controllermanager.go:775] "Warning: skipping controller" controller="node-route-controller"
	I1014 08:47:24.559611   15224 command_runner.go:130] ! I1014 15:22:40.424173       1 node_ipam_controller.go:141] "Starting ipam controller" logger="node-ipam-controller"
	I1014 08:47:24.559645   15224 command_runner.go:130] ! I1014 15:22:40.425286       1 shared_informer.go:313] Waiting for caches to sync for node
	I1014 08:47:24.559645   15224 command_runner.go:130] ! I1014 15:22:40.570482       1 controllermanager.go:797] "Started controller" controller="persistentvolume-expander-controller"
	I1014 08:47:24.559645   15224 command_runner.go:130] ! I1014 15:22:40.570669       1 expand_controller.go:328] "Starting expand controller" logger="persistentvolume-expander-controller"
	I1014 08:47:24.559704   15224 command_runner.go:130] ! I1014 15:22:40.570684       1 shared_informer.go:313] Waiting for caches to sync for expand
	I1014 08:47:24.559704   15224 command_runner.go:130] ! I1014 15:22:40.718742       1 controllermanager.go:797] "Started controller" controller="ttl-after-finished-controller"
	I1014 08:47:24.559761   15224 command_runner.go:130] ! I1014 15:22:40.718766       1 controllermanager.go:749] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I1014 08:47:24.559761   15224 command_runner.go:130] ! I1014 15:22:40.718828       1 ttlafterfinished_controller.go:112] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I1014 08:47:24.559805   15224 command_runner.go:130] ! I1014 15:22:40.718839       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I1014 08:47:24.559805   15224 command_runner.go:130] ! I1014 15:22:40.875244       1 controllermanager.go:797] "Started controller" controller="endpointslice-controller"
	I1014 08:47:24.559805   15224 command_runner.go:130] ! I1014 15:22:40.875390       1 endpointslice_controller.go:281] "Starting endpoint slice controller" logger="endpointslice-controller"
	I1014 08:47:24.559869   15224 command_runner.go:130] ! I1014 15:22:40.875405       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I1014 08:47:24.559926   15224 command_runner.go:130] ! I1014 15:22:41.022254       1 controllermanager.go:797] "Started controller" controller="replicaset-controller"
	I1014 08:47:24.559926   15224 command_runner.go:130] ! I1014 15:22:41.023099       1 replica_set.go:217] "Starting controller" logger="replicaset-controller" name="replicaset"
	I1014 08:47:24.559926   15224 command_runner.go:130] ! I1014 15:22:41.023161       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I1014 08:47:24.559926   15224 command_runner.go:130] ! I1014 15:22:41.176342       1 controllermanager.go:797] "Started controller" controller="statefulset-controller"
	I1014 08:47:24.559926   15224 command_runner.go:130] ! I1014 15:22:41.176460       1 stateful_set.go:166] "Starting stateful set controller" logger="statefulset-controller"
	I1014 08:47:24.559926   15224 command_runner.go:130] ! I1014 15:22:41.176471       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I1014 08:47:24.559926   15224 command_runner.go:130] ! I1014 15:22:41.319171       1 controllermanager.go:797] "Started controller" controller="persistentvolume-protection-controller"
	I1014 08:47:24.559926   15224 command_runner.go:130] ! I1014 15:22:41.319300       1 pv_protection_controller.go:81] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I1014 08:47:24.559926   15224 command_runner.go:130] ! I1014 15:22:41.319332       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I1014 08:47:24.559926   15224 command_runner.go:130] ! I1014 15:22:41.469263       1 controllermanager.go:797] "Started controller" controller="clusterrole-aggregation-controller"
	I1014 08:47:24.559926   15224 command_runner.go:130] ! I1014 15:22:41.469488       1 clusterroleaggregation_controller.go:194] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I1014 08:47:24.559926   15224 command_runner.go:130] ! I1014 15:22:41.470311       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I1014 08:47:24.559926   15224 command_runner.go:130] ! I1014 15:22:41.618471       1 controllermanager.go:797] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I1014 08:47:24.559926   15224 command_runner.go:130] ! I1014 15:22:41.618507       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I1014 08:47:24.559926   15224 command_runner.go:130] ! I1014 15:22:41.619582       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I1014 08:47:24.559926   15224 command_runner.go:130] ! I1014 15:22:41.813364       1 controllermanager.go:797] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I1014 08:47:24.559926   15224 command_runner.go:130] ! I1014 15:22:41.813412       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I1014 08:47:24.559926   15224 command_runner.go:130] ! I1014 15:22:42.123997       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I1014 08:47:24.559926   15224 command_runner.go:130] ! I1014 15:22:42.124656       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I1014 08:47:24.559926   15224 command_runner.go:130] ! I1014 15:22:42.125147       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I1014 08:47:24.559926   15224 command_runner.go:130] ! I1014 15:22:42.125502       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I1014 08:47:24.559926   15224 command_runner.go:130] ! I1014 15:22:42.125684       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I1014 08:47:24.559926   15224 command_runner.go:130] ! I1014 15:22:42.125715       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I1014 08:47:24.559926   15224 command_runner.go:130] ! I1014 15:22:42.125765       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I1014 08:47:24.559926   15224 command_runner.go:130] ! I1014 15:22:42.125789       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I1014 08:47:24.559926   15224 command_runner.go:130] ! I1014 15:22:42.125821       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I1014 08:47:24.560455   15224 command_runner.go:130] ! I1014 15:22:42.125850       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I1014 08:47:24.560505   15224 command_runner.go:130] ! I1014 15:22:42.125919       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I1014 08:47:24.560505   15224 command_runner.go:130] ! I1014 15:22:42.125938       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I1014 08:47:24.560505   15224 command_runner.go:130] ! W1014 15:22:42.125970       1 shared_informer.go:597] resyncPeriod 22h30m25.60471532s is smaller than resyncCheckPeriod 23h6m49.635307948s and the informer has already started. Changing it to 23h6m49.635307948s
	I1014 08:47:24.560505   15224 command_runner.go:130] ! W1014 15:22:42.126028       1 shared_informer.go:597] resyncPeriod 22h40m57.132720005s is smaller than resyncCheckPeriod 23h6m49.635307948s and the informer has already started. Changing it to 23h6m49.635307948s
	I1014 08:47:24.560505   15224 command_runner.go:130] ! I1014 15:22:42.126215       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I1014 08:47:24.560612   15224 command_runner.go:130] ! I1014 15:22:42.126353       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I1014 08:47:24.560612   15224 command_runner.go:130] ! I1014 15:22:42.126435       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I1014 08:47:24.560612   15224 command_runner.go:130] ! I1014 15:22:42.126461       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I1014 08:47:24.560676   15224 command_runner.go:130] ! I1014 15:22:42.126498       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I1014 08:47:24.560676   15224 command_runner.go:130] ! I1014 15:22:42.126514       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I1014 08:47:24.560676   15224 command_runner.go:130] ! I1014 15:22:42.126546       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I1014 08:47:24.560676   15224 command_runner.go:130] ! I1014 15:22:42.126572       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I1014 08:47:24.560676   15224 command_runner.go:130] ! I1014 15:22:42.126591       1 controllermanager.go:797] "Started controller" controller="resourcequota-controller"
	I1014 08:47:24.560676   15224 command_runner.go:130] ! I1014 15:22:42.127139       1 resource_quota_controller.go:300] "Starting resource quota controller" logger="resourcequota-controller"
	I1014 08:47:24.560676   15224 command_runner.go:130] ! I1014 15:22:42.127191       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I1014 08:47:24.560676   15224 command_runner.go:130] ! I1014 15:22:42.127239       1 resource_quota_monitor.go:308] "QuotaMonitor running" logger="resourcequota-controller"
	I1014 08:47:24.560676   15224 command_runner.go:130] ! I1014 15:22:42.377410       1 controllermanager.go:797] "Started controller" controller="namespace-controller"
	I1014 08:47:24.560676   15224 command_runner.go:130] ! I1014 15:22:42.378109       1 namespace_controller.go:202] "Starting namespace controller" logger="namespace-controller"
	I1014 08:47:24.560676   15224 command_runner.go:130] ! I1014 15:22:42.378533       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I1014 08:47:24.560676   15224 command_runner.go:130] ! I1014 15:22:42.520088       1 controllermanager.go:797] "Started controller" controller="job-controller"
	I1014 08:47:24.560676   15224 command_runner.go:130] ! I1014 15:22:42.520194       1 job_controller.go:226] "Starting job controller" logger="job-controller"
	I1014 08:47:24.560676   15224 command_runner.go:130] ! I1014 15:22:42.520661       1 shared_informer.go:313] Waiting for caches to sync for job
	I1014 08:47:24.560676   15224 command_runner.go:130] ! I1014 15:22:42.669141       1 controllermanager.go:797] "Started controller" controller="ttl-controller"
	I1014 08:47:24.560676   15224 command_runner.go:130] ! I1014 15:22:42.669227       1 ttl_controller.go:127] "Starting TTL controller" logger="ttl-controller"
	I1014 08:47:24.560676   15224 command_runner.go:130] ! I1014 15:22:42.669239       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I1014 08:47:24.560676   15224 command_runner.go:130] ! I1014 15:22:42.713738       1 node_lifecycle_controller.go:430] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I1014 08:47:24.560676   15224 command_runner.go:130] ! I1014 15:22:42.713795       1 controllermanager.go:797] "Started controller" controller="node-lifecycle-controller"
	I1014 08:47:24.560676   15224 command_runner.go:130] ! I1014 15:22:42.713972       1 node_lifecycle_controller.go:464] "Sending events to api server" logger="node-lifecycle-controller"
	I1014 08:47:24.560676   15224 command_runner.go:130] ! I1014 15:22:42.714019       1 node_lifecycle_controller.go:475] "Starting node controller" logger="node-lifecycle-controller"
	I1014 08:47:24.560676   15224 command_runner.go:130] ! I1014 15:22:42.714028       1 shared_informer.go:313] Waiting for caches to sync for taint
	I1014 08:47:24.560676   15224 command_runner.go:130] ! E1014 15:22:42.870353       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I1014 08:47:24.560676   15224 command_runner.go:130] ! I1014 15:22:42.870400       1 controllermanager.go:775] "Warning: skipping controller" controller="service-lb-controller"
	I1014 08:47:24.560676   15224 command_runner.go:130] ! I1014 15:22:43.022018       1 controllermanager.go:797] "Started controller" controller="persistentvolume-attach-detach-controller"
	I1014 08:47:24.560676   15224 command_runner.go:130] ! I1014 15:22:43.022670       1 attach_detach_controller.go:338] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I1014 08:47:24.560676   15224 command_runner.go:130] ! I1014 15:22:43.022756       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I1014 08:47:24.560676   15224 command_runner.go:130] ! I1014 15:22:43.169053       1 controllermanager.go:797] "Started controller" controller="ephemeral-volume-controller"
	I1014 08:47:24.560676   15224 command_runner.go:130] ! I1014 15:22:43.169165       1 controller.go:173] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I1014 08:47:24.560676   15224 command_runner.go:130] ! I1014 15:22:43.169572       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I1014 08:47:24.561203   15224 command_runner.go:130] ! I1014 15:22:43.319453       1 controllermanager.go:797] "Started controller" controller="endpoints-controller"
	I1014 08:47:24.561203   15224 command_runner.go:130] ! I1014 15:22:43.319620       1 endpoints_controller.go:182] "Starting endpoint controller" logger="endpoints-controller"
	I1014 08:47:24.561283   15224 command_runner.go:130] ! I1014 15:22:43.319648       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I1014 08:47:24.561283   15224 command_runner.go:130] ! I1014 15:22:43.471065       1 controllermanager.go:797] "Started controller" controller="deployment-controller"
	I1014 08:47:24.561283   15224 command_runner.go:130] ! I1014 15:22:43.471807       1 deployment_controller.go:173] "Starting controller" logger="deployment-controller" controller="deployment"
	I1014 08:47:24.561283   15224 command_runner.go:130] ! I1014 15:22:43.472102       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I1014 08:47:24.561283   15224 command_runner.go:130] ! I1014 15:22:43.621382       1 controllermanager.go:797] "Started controller" controller="cronjob-controller"
	I1014 08:47:24.561384   15224 command_runner.go:130] ! I1014 15:22:43.621522       1 cronjob_controllerv2.go:145] "Starting cronjob controller v2" logger="cronjob-controller"
	I1014 08:47:24.561407   15224 command_runner.go:130] ! I1014 15:22:43.621537       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I1014 08:47:24.561407   15224 command_runner.go:130] ! I1014 15:22:43.663267       1 controllermanager.go:797] "Started controller" controller="certificatesigningrequest-approving-controller"
	I1014 08:47:24.561474   15224 command_runner.go:130] ! I1014 15:22:43.663415       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I1014 08:47:24.561500   15224 command_runner.go:130] ! I1014 15:22:43.663427       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I1014 08:47:24.561532   15224 command_runner.go:130] ! I1014 15:22:43.822946       1 controllermanager.go:797] "Started controller" controller="bootstrap-signer-controller"
	I1014 08:47:24.561532   15224 command_runner.go:130] ! I1014 15:22:43.822992       1 controllermanager.go:775] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I1014 08:47:24.561532   15224 command_runner.go:130] ! I1014 15:22:43.823061       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I1014 08:47:24.561532   15224 command_runner.go:130] ! I1014 15:22:43.863507       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I1014 08:47:24.561532   15224 command_runner.go:130] ! I1014 15:22:43.863638       1 controllermanager.go:797] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I1014 08:47:24.561532   15224 command_runner.go:130] ! I1014 15:22:43.863659       1 controllermanager.go:749] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I1014 08:47:24.561532   15224 command_runner.go:130] ! I1014 15:22:43.902554       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I1014 08:47:24.561532   15224 command_runner.go:130] ! I1014 15:22:43.913563       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I1014 08:47:24.561532   15224 command_runner.go:130] ! I1014 15:22:43.916687       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-671000\" does not exist"
	I1014 08:47:24.561532   15224 command_runner.go:130] ! I1014 15:22:43.921355       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I1014 08:47:24.561532   15224 command_runner.go:130] ! I1014 15:22:43.921578       1 shared_informer.go:320] Caches are synced for cronjob
	I1014 08:47:24.561532   15224 command_runner.go:130] ! I1014 15:22:43.921709       1 shared_informer.go:320] Caches are synced for job
	I1014 08:47:24.561532   15224 command_runner.go:130] ! I1014 15:22:43.921822       1 shared_informer.go:320] Caches are synced for PV protection
	I1014 08:47:24.561532   15224 command_runner.go:130] ! I1014 15:22:43.922806       1 shared_informer.go:320] Caches are synced for PVC protection
	I1014 08:47:24.561532   15224 command_runner.go:130] ! I1014 15:22:43.922814       1 shared_informer.go:320] Caches are synced for TTL after finished
	I1014 08:47:24.561532   15224 command_runner.go:130] ! I1014 15:22:43.924127       1 shared_informer.go:320] Caches are synced for persistent volume
	I1014 08:47:24.561532   15224 command_runner.go:130] ! I1014 15:22:43.924751       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I1014 08:47:24.561532   15224 command_runner.go:130] ! I1014 15:22:43.925596       1 shared_informer.go:320] Caches are synced for node
	I1014 08:47:24.561532   15224 command_runner.go:130] ! I1014 15:22:43.925653       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I1014 08:47:24.561532   15224 command_runner.go:130] ! I1014 15:22:43.925863       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1014 08:47:24.561532   15224 command_runner.go:130] ! I1014 15:22:43.925961       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I1014 08:47:24.561532   15224 command_runner.go:130] ! I1014 15:22:43.925971       1 shared_informer.go:320] Caches are synced for cidrallocator
	I1014 08:47:24.561532   15224 command_runner.go:130] ! I1014 15:22:43.927918       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I1014 08:47:24.561532   15224 command_runner.go:130] ! I1014 15:22:43.933656       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I1014 08:47:24.561532   15224 command_runner.go:130] ! I1014 15:22:43.935993       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I1014 08:47:24.561532   15224 command_runner.go:130] ! I1014 15:22:43.939827       1 shared_informer.go:320] Caches are synced for GC
	I1014 08:47:24.561532   15224 command_runner.go:130] ! I1014 15:22:43.945652       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-671000" podCIDRs=["10.244.0.0/24"]
	I1014 08:47:24.561532   15224 command_runner.go:130] ! I1014 15:22:43.945733       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000"
	I1014 08:47:24.561532   15224 command_runner.go:130] ! I1014 15:22:43.946434       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000"
	I1014 08:47:24.561532   15224 command_runner.go:130] ! I1014 15:22:43.958217       1 shared_informer.go:320] Caches are synced for service account
	I1014 08:47:24.561532   15224 command_runner.go:130] ! I1014 15:22:43.964566       1 shared_informer.go:320] Caches are synced for HPA
	I1014 08:47:24.562071   15224 command_runner.go:130] ! I1014 15:22:43.970909       1 shared_informer.go:320] Caches are synced for TTL
	I1014 08:47:24.562071   15224 command_runner.go:130] ! I1014 15:22:43.971119       1 shared_informer.go:320] Caches are synced for ephemeral
	I1014 08:47:24.562121   15224 command_runner.go:130] ! I1014 15:22:43.971337       1 shared_informer.go:320] Caches are synced for expand
	I1014 08:47:24.562121   15224 command_runner.go:130] ! I1014 15:22:43.975501       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I1014 08:47:24.562165   15224 command_runner.go:130] ! I1014 15:22:43.976796       1 shared_informer.go:320] Caches are synced for stateful set
	I1014 08:47:24.562165   15224 command_runner.go:130] ! I1014 15:22:43.978344       1 shared_informer.go:320] Caches are synced for crt configmap
	I1014 08:47:24.562165   15224 command_runner.go:130] ! I1014 15:22:43.978435       1 shared_informer.go:320] Caches are synced for daemon sets
	I1014 08:47:24.562275   15224 command_runner.go:130] ! I1014 15:22:43.980084       1 shared_informer.go:320] Caches are synced for namespace
	I1014 08:47:24.562452   15224 command_runner.go:130] ! I1014 15:22:44.014728       1 shared_informer.go:320] Caches are synced for taint
	I1014 08:47:24.562452   15224 command_runner.go:130] ! I1014 15:22:44.015046       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1014 08:47:24.562452   15224 command_runner.go:130] ! I1014 15:22:44.015932       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-671000"
	I1014 08:47:24.562452   15224 command_runner.go:130] ! I1014 15:22:44.016156       1 node_lifecycle_controller.go:1036] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1014 08:47:24.562542   15224 command_runner.go:130] ! I1014 15:22:44.020094       1 shared_informer.go:320] Caches are synced for endpoint
	I1014 08:47:24.562542   15224 command_runner.go:130] ! I1014 15:22:44.020640       1 shared_informer.go:320] Caches are synced for ReplicationController
	I1014 08:47:24.562542   15224 command_runner.go:130] ! I1014 15:22:44.071958       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I1014 08:47:24.562609   15224 command_runner.go:130] ! I1014 15:22:44.103447       1 shared_informer.go:320] Caches are synced for resource quota
	I1014 08:47:24.562637   15224 command_runner.go:130] ! I1014 15:22:44.118642       1 shared_informer.go:320] Caches are synced for disruption
	I1014 08:47:24.562664   15224 command_runner.go:130] ! I1014 15:22:44.123565       1 shared_informer.go:320] Caches are synced for attach detach
	I1014 08:47:24.562664   15224 command_runner.go:130] ! I1014 15:22:44.124082       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I1014 08:47:24.562664   15224 command_runner.go:130] ! I1014 15:22:44.128052       1 shared_informer.go:320] Caches are synced for resource quota
	I1014 08:47:24.562664   15224 command_runner.go:130] ! I1014 15:22:44.164601       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I1014 08:47:24.562664   15224 command_runner.go:130] ! I1014 15:22:44.170410       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I1014 08:47:24.562664   15224 command_runner.go:130] ! I1014 15:22:44.172085       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I1014 08:47:24.562664   15224 command_runner.go:130] ! I1014 15:22:44.172168       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I1014 08:47:24.562664   15224 command_runner.go:130] ! I1014 15:22:44.172762       1 shared_informer.go:320] Caches are synced for deployment
	I1014 08:47:24.562664   15224 command_runner.go:130] ! I1014 15:22:44.173998       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1014 08:47:24.562664   15224 command_runner.go:130] ! I1014 15:22:44.583260       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000"
	I1014 08:47:24.562664   15224 command_runner.go:130] ! I1014 15:22:44.634360       1 shared_informer.go:320] Caches are synced for garbage collector
	I1014 08:47:24.562664   15224 command_runner.go:130] ! I1014 15:22:44.669630       1 shared_informer.go:320] Caches are synced for garbage collector
	I1014 08:47:24.562664   15224 command_runner.go:130] ! I1014 15:22:44.669841       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I1014 08:47:24.562664   15224 command_runner.go:130] ! I1014 15:22:45.450540       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="308.738304ms"
	I1014 08:47:24.562664   15224 command_runner.go:130] ! I1014 15:22:45.524372       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="73.173482ms"
	I1014 08:47:24.562664   15224 command_runner.go:130] ! I1014 15:22:45.524478       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="65.397µs"
	I1014 08:47:24.562664   15224 command_runner.go:130] ! I1014 15:22:46.000395       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="74.724912ms"
	I1014 08:47:24.562664   15224 command_runner.go:130] ! I1014 15:22:46.017930       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="16.329807ms"
	I1014 08:47:24.562664   15224 command_runner.go:130] ! I1014 15:22:46.018255       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="275.988µs"
	I1014 08:47:24.562664   15224 command_runner.go:130] ! I1014 15:23:06.558708       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000"
	I1014 08:47:24.562664   15224 command_runner.go:130] ! I1014 15:23:06.579629       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000"
	I1014 08:47:24.562664   15224 command_runner.go:130] ! I1014 15:23:06.601705       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="101.399µs"
	I1014 08:47:24.562664   15224 command_runner.go:130] ! I1014 15:23:06.643522       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="58.099µs"
	I1014 08:47:24.562664   15224 command_runner.go:130] ! I1014 15:23:08.868021       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="148.904µs"
	I1014 08:47:24.562664   15224 command_runner.go:130] ! I1014 15:23:08.936155       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="33.695698ms"
	I1014 08:47:24.562664   15224 command_runner.go:130] ! I1014 15:23:08.939220       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="3.012072ms"
	I1014 08:47:24.563195   15224 command_runner.go:130] ! I1014 15:23:09.023157       1 node_lifecycle_controller.go:1055] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I1014 08:47:24.563257   15224 command_runner.go:130] ! I1014 15:23:10.921399       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000"
	I1014 08:47:24.563257   15224 command_runner.go:130] ! I1014 15:25:49.920125       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-671000-m02\" does not exist"
	I1014 08:47:24.563257   15224 command_runner.go:130] ! I1014 15:25:49.955308       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-671000-m02" podCIDRs=["10.244.1.0/24"]
	I1014 08:47:24.563257   15224 command_runner.go:130] ! I1014 15:25:49.956041       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:25:49.956493       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:25:50.332394       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:25:50.885049       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:25:54.059204       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-671000-m02"
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:25:54.342262       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:26:00.157293       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:26:18.720546       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:26:18.720611       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-671000-m02"
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:26:18.738467       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:26:19.084143       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:26:20.411603       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:26:44.435156       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="75.721873ms"
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:26:44.496244       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="60.852418ms"
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:26:44.496945       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="131.501µs"
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:26:44.540742       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="117.6µs"
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:26:47.680512       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="13.465591ms"
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:26:47.680616       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="27.8µs"
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:26:47.878633       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="13.308091ms"
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:26:47.878779       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="30.7µs"
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:26:50.724728       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:27:15.823577       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000"
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:30:35.115559       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-671000-m02"
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:30:35.116078       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-671000-m03\" does not exist"
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:30:35.128392       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-671000-m03" podCIDRs=["10.244.2.0/24"]
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:30:35.128677       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:30:35.128924       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:30:35.152829       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:30:35.373296       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:30:35.920577       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:30:39.132287       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-671000-m03"
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:30:39.151825       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:30:45.490553       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:31:04.306000       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:31:04.306453       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-671000-m03"
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:31:04.323636       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:31:05.841789       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:31:09.153752       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:31:56.911043       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:32:21.316935       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000"
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:36:11.719246       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:37:02.446841       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:37:26.676097       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000"
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:38:59.261991       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-671000-m02"
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:38:59.262728       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:38:59.286871       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:24.564229   15224 command_runner.go:130] ! I1014 15:39:04.424423       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:24.564229   15224 command_runner.go:130] ! I1014 15:41:24.025444       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:24.564229   15224 command_runner.go:130] ! I1014 15:41:24.063975       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:24.564229   15224 command_runner.go:130] ! I1014 15:41:29.184402       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:24.564229   15224 command_runner.go:130] ! I1014 15:41:29.185577       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-671000-m02"
	I1014 08:47:24.564229   15224 command_runner.go:130] ! I1014 15:41:34.952323       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-671000-m03\" does not exist"
	I1014 08:47:24.564229   15224 command_runner.go:130] ! I1014 15:41:34.952330       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-671000-m02"
	I1014 08:47:24.564229   15224 command_runner.go:130] ! I1014 15:41:34.966125       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-671000-m03" podCIDRs=["10.244.3.0/24"]
	I1014 08:47:24.564229   15224 command_runner.go:130] ! I1014 15:41:34.966148       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:24.564229   15224 command_runner.go:130] ! I1014 15:41:34.966505       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:24.564229   15224 command_runner.go:130] ! I1014 15:41:34.987165       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:24.564229   15224 command_runner.go:130] ! I1014 15:41:35.003234       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:24.564229   15224 command_runner.go:130] ! I1014 15:41:35.540526       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:24.564229   15224 command_runner.go:130] ! I1014 15:41:39.448073       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:24.564229   15224 command_runner.go:130] ! I1014 15:41:45.343875       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:24.564229   15224 command_runner.go:130] ! I1014 15:41:53.719761       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:24.564229   15224 command_runner.go:130] ! I1014 15:41:53.720945       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-671000-m02"
	I1014 08:47:24.564229   15224 command_runner.go:130] ! I1014 15:41:53.741507       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:24.564229   15224 command_runner.go:130] ! I1014 15:41:54.369330       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:24.564229   15224 command_runner.go:130] ! I1014 15:42:08.557249       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:24.564229   15224 command_runner.go:130] ! I1014 15:42:32.770970       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000"
	I1014 08:47:24.564229   15224 command_runner.go:130] ! I1014 15:43:29.631595       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-671000-m02"
	I1014 08:47:24.564229   15224 command_runner.go:130] ! I1014 15:43:29.632207       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:24.564229   15224 command_runner.go:130] ! I1014 15:43:29.853526       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:24.564229   15224 command_runner.go:130] ! I1014 15:43:35.163131       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:24.564229   15224 command_runner.go:130] ! I1014 15:43:45.119758       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:24.564229   15224 command_runner.go:130] ! I1014 15:43:45.151031       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:24.564229   15224 command_runner.go:130] ! I1014 15:43:45.251625       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="18.269341ms"
	I1014 08:47:24.564229   15224 command_runner.go:130] ! I1014 15:43:45.252472       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="121.1µs"
	I1014 08:47:24.584229   15224 logs.go:123] Gathering logs for kubelet ...
	I1014 08:47:24.584229   15224 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 08:47:24.613872   15224 command_runner.go:130] > Oct 14 15:46:05 multinode-671000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I1014 08:47:24.613872   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 kubelet[1480]: I1014 15:46:06.037054    1480 server.go:486] "Kubelet version" kubeletVersion="v1.31.1"
	I1014 08:47:24.613872   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 kubelet[1480]: I1014 15:46:06.037147    1480 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 08:47:24.613872   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 kubelet[1480]: I1014 15:46:06.038385    1480 server.go:929] "Client rotation is on, will bootstrap in background"
	I1014 08:47:24.614837   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 kubelet[1480]: E1014 15:46:06.039788    1480 run.go:72] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I1014 08:47:24.614837   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I1014 08:47:24.614837   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I1014 08:47:24.614837   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I1014 08:47:24.614837   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I1014 08:47:24.614837   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I1014 08:47:24.614837   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 kubelet[1540]: I1014 15:46:06.835721    1540 server.go:486] "Kubelet version" kubeletVersion="v1.31.1"
	I1014 08:47:24.614837   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 kubelet[1540]: I1014 15:46:06.835931    1540 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 08:47:24.614837   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 kubelet[1540]: I1014 15:46:06.836250    1540 server.go:929] "Client rotation is on, will bootstrap in background"
	I1014 08:47:24.614837   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 kubelet[1540]: E1014 15:46:06.836463    1540 run.go:72] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I1014 08:47:24.614837   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I1014 08:47:24.614837   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I1014 08:47:24.614837   15224 command_runner.go:130] > Oct 14 15:46:07 multinode-671000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I1014 08:47:24.614837   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I1014 08:47:24.614837   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.687712    1622 server.go:486] "Kubelet version" kubeletVersion="v1.31.1"
	I1014 08:47:24.614837   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.688474    1622 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 08:47:24.614837   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.689105    1622 server.go:929] "Client rotation is on, will bootstrap in background"
	I1014 08:47:24.614837   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.691939    1622 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I1014 08:47:24.614837   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.718455    1622 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1014 08:47:24.614837   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: E1014 15:46:09.739709    1622 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
	I1014 08:47:24.614837   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.739760    1622 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
	I1014 08:47:24.614837   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.744155    1622 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I1014 08:47:24.614837   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.744395    1622 server.go:812] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I1014 08:47:24.614837   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.744486    1622 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
	I1014 08:47:24.614837   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.744668    1622 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I1014 08:47:24.614837   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.744761    1622 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"multinode-671000","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
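The nodeConfig above is one flattened JSON object; pretty-printing it makes settings like CgroupDriver and CgroupVersion easier to scan when reading these logs. A minimal Go sketch (the fragment below is trimmed from the full blob above; the field set is only what the log shows):

    package main

    import (
        "bytes"
        "encoding/json"
        "fmt"
    )

    func main() {
        // Trimmed fragment of the nodeConfig logged above.
        raw := []byte(`{"NodeName":"multinode-671000","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","CgroupVersion":2}`)
        var pretty bytes.Buffer
        if err := json.Indent(&pretty, raw, "", "  "); err != nil {
            panic(err)
        }
        fmt.Println(pretty.String())
    }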
	I1014 08:47:24.614837   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.746378    1622 topology_manager.go:138] "Creating topology manager with none policy"
	I1014 08:47:24.614837   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.746460    1622 container_manager_linux.go:300] "Creating device plugin manager"
	I1014 08:47:24.614837   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.746633    1622 state_mem.go:36] "Initialized new in-memory state store"
	I1014 08:47:24.614837   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.749964    1622 kubelet.go:408] "Attempting to sync node with API server"
	I1014 08:47:24.614837   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.750004    1622 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
	I1014 08:47:24.614837   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.750036    1622 kubelet.go:314] "Adding apiserver pod source"
	I1014 08:47:24.614837   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.750844    1622 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I1014 08:47:24.614837   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.756693    1622 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="docker" version="27.3.1" apiVersion="v1"
	I1014 08:47:24.614837   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.763816    1622 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
	I1014 08:47:24.614837   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: W1014 15:46:09.764725    1622 probe.go:272] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I1014 08:47:24.614837   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.766925    1622 server.go:1269] "Started kubelet"
	I1014 08:47:24.614837   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: W1014 15:46:09.767088    1622 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-671000&limit=500&resourceVersion=0": dial tcp 172.20.106.123:8443: connect: connection refused
	I1014 08:47:24.614837   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: E1014 15:46:09.767172    1622 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-671000&limit=500&resourceVersion=0\": dial tcp 172.20.106.123:8443: connect: connection refused" logger="UnhandledError"
	I1014 08:47:24.615856   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: W1014 15:46:09.769189    1622 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.20.106.123:8443: connect: connection refused
	I1014 08:47:24.615856   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: E1014 15:46:09.769350    1622 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.20.106.123:8443: connect: connection refused" logger="UnhandledError"
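The reflector errors above come from client-go informers: each one first lists a resource (Node, Service, CSIDriver), then watches it, and keeps retrying while the apiserver at 172.20.106.123:8443 refuses connections. A minimal sketch of that list/watch pattern for nodes; the kubeconfig path is illustrative, not taken from the report:

    package main

    import (
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/cache"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Kubeconfig path is an assumption for the sketch; adjust as needed.
        config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }

        factory := informers.NewSharedInformerFactory(clientset, 30*time.Second)
        nodeInformer := factory.Core().V1().Nodes().Informer()
        nodeInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
            AddFunc: func(obj interface{}) {
                fmt.Println("node added:", obj.(*corev1.Node).Name)
            },
        })

        stop := make(chan struct{})
        defer close(stop)
        factory.Start(stop)
        // Blocks until the initial List succeeds -- exactly the step that is
        // failing with "connection refused" in the log above.
        cache.WaitForCacheSync(stop, nodeInformer.HasSynced)
        select {}
    }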
	I1014 08:47:24.615856   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.769454    1622 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I1014 08:47:24.615856   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.770134    1622 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I1014 08:47:24.615856   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: E1014 15:46:09.772237    1622 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.20.106.123:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-671000.17fe5c47a6bff791  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-671000,UID:multinode-671000,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-671000,},FirstTimestamp:2024-10-14 15:46:09.766881169 +0000 UTC m=+0.158153075,LastTimestamp:2024-10-14 15:46:09.766881169 +0000 UTC m=+0.158153075,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-671000,}"
	I1014 08:47:24.615856   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.773096    1622 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
	I1014 08:47:24.615856   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.774576    1622 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I1014 08:47:24.615856   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.777686    1622 server.go:460] "Adding debug handlers to kubelet server"
	I1014 08:47:24.615856   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.780950    1622 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
	I1014 08:47:24.615856   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.788697    1622 volume_manager.go:289] "Starting Kubelet Volume Manager"
	I1014 08:47:24.615856   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: E1014 15:46:09.789003    1622 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"multinode-671000\" not found"
	I1014 08:47:24.615856   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.789640    1622 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
	I1014 08:47:24.615856   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.800447    1622 factory.go:221] Registration of the systemd container factory successfully
	I1014 08:47:24.615856   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.800536    1622 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I1014 08:47:24.615856   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.800587    1622 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
	I1014 08:47:24.615856   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: W1014 15:46:09.811192    1622 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.20.106.123:8443: connect: connection refused
	I1014 08:47:24.615856   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: E1014 15:46:09.811498    1622 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.20.106.123:8443: connect: connection refused" logger="UnhandledError"
	I1014 08:47:24.615856   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: E1014 15:46:09.812017    1622 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-671000?timeout=10s\": dial tcp 172.20.106.123:8443: connect: connection refused" interval="200ms"
	I1014 08:47:24.615856   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.863497    1622 cpu_manager.go:214] "Starting CPU manager" policy="none"
	I1014 08:47:24.615856   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.863530    1622 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
	I1014 08:47:24.615856   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.863554    1622 state_mem.go:36] "Initialized new in-memory state store"
	I1014 08:47:24.615856   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.868881    1622 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I1014 08:47:24.615856   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.868953    1622 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I1014 08:47:24.615856   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.868995    1622 policy_none.go:49] "None policy: Start"
	I1014 08:47:24.615856   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.872200    1622 reconciler.go:26] "Reconciler: start to sync state"
	I1014 08:47:24.615856   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.877834    1622 memory_manager.go:170] "Starting memorymanager" policy="None"
	I1014 08:47:24.615856   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.877929    1622 state_mem.go:35] "Initializing new in-memory state store"
	I1014 08:47:24.615856   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.878704    1622 state_mem.go:75] "Updated machine memory state"
	I1014 08:47:24.615856   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.884555    1622 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I1014 08:47:24.615856   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.885687    1622 eviction_manager.go:189] "Eviction manager: starting control loop"
	I1014 08:47:24.615856   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.885828    1622 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I1014 08:47:24.615856   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: E1014 15:46:09.889524    1622 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-671000\" not found"
	I1014 08:47:24.616835   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.892062    1622 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I1014 08:47:24.616835   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.900012    1622 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I1014 08:47:24.616835   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.905094    1622 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I1014 08:47:24.616835   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.906277    1622 status_manager.go:217] "Starting to sync pod status with apiserver"
	I1014 08:47:24.616835   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.906885    1622 kubelet.go:2321] "Starting kubelet main sync loop"
	I1014 08:47:24.616835   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: E1014 15:46:09.907458    1622 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
	I1014 08:47:24.616835   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: W1014 15:46:09.914061    1622 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.20.106.123:8443: connect: connection refused
	I1014 08:47:24.616835   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: E1014 15:46:09.914371    1622 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.20.106.123:8443: connect: connection refused" logger="UnhandledError"
	I1014 08:47:24.616835   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: E1014 15:46:09.933056    1622 iptables.go:577] "Could not set up iptables canary" err=<
	I1014 08:47:24.616835   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I1014 08:47:24.616835   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I1014 08:47:24.616835   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I1014 08:47:24.616835   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]:  > table="nat" chain="KUBE-KUBELET-CANARY"
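The canary failure above means the ip6tables nat table could not be initialized in the guest, typically because the ip6table_nat kernel module is absent. A quick host-side probe that fails the same way when the table is unavailable (a sketch; assumes ip6tables is on PATH inside the VM):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Listing the nat table exits non-zero if it cannot be initialized,
        // mirroring the kubelet canary's error.
        out, err := exec.Command("ip6tables", "-t", "nat", "-L", "-n").CombinedOutput()
        fmt.Print(string(out))
        if err != nil {
            fmt.Println("ip6tables nat table unavailable:", err)
        }
    }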
	I1014 08:47:24.616835   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.987581    1622 kubelet_node_status.go:72] "Attempting to register node" node="multinode-671000"
	I1014 08:47:24.616835   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: E1014 15:46:09.988812    1622 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.20.106.123:8443: connect: connection refused" node="multinode-671000"
	I1014 08:47:24.616835   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.008458    1622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f8cc9a218fef4446839b0253827fc2dd89f8eccbd66ab7f0e5123c033f0aacc"
	I1014 08:47:24.616835   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: E1014 15:46:10.013887    1622 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-671000?timeout=10s\": dial tcp 172.20.106.123:8443: connect: connection refused" interval="400ms"
	I1014 08:47:24.616835   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.014354    1622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5733d27d2f1c328dbd19f6392a86e426f344b6f17c65211404fa797e84b69c9"
	I1014 08:47:24.616835   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.014436    1622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06e529266db4b2cf3ad09285cddeb89d7d11e858b5156cf73f41198c9500b9f2"
	I1014 08:47:24.616835   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.014506    1622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e48ddcfdf90ad3bfbe621f27c97a331f448947ca77dbd98ab3c9daef2c84e22"
	I1014 08:47:24.616835   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.020161    1622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2dc78387553ff4b78626f5e6aa103a40ec97f42ef49363e27d7d3698cd0df26f"
	I1014 08:47:24.616835   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.035902    1622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c6be2bd1889b6c5f021362c07c3a88f7f0ff266bb9e8ba4106d666b0f1d267d"
	I1014 08:47:24.616835   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.049024    1622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1863de70f2316e54fa61ef7c5c6aba94808669b81b1cc811dce745011ee807cb"
	I1014 08:47:24.616835   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.065264    1622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7144d8ce208cf8c176ad1fc9980a72d450a3d558c4f8f9ee453dea6b22358085"
	I1014 08:47:24.616835   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.079145    1622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bfdde08319e32b93d740933d5ab50829de8f9f3edacce92efe155b4ada4f4212"
	I1014 08:47:24.616835   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.179820    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/864b69f35cb25e9dd5d87a753a055a10-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-671000\" (UID: \"864b69f35cb25e9dd5d87a753a055a10\") " pod="kube-system/kube-apiserver-multinode-671000"
	I1014 08:47:24.616835   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.179915    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b83e31864e0ae98d29d960866012ecb0-k8s-certs\") pod \"kube-controller-manager-multinode-671000\" (UID: \"b83e31864e0ae98d29d960866012ecb0\") " pod="kube-system/kube-controller-manager-multinode-671000"
	I1014 08:47:24.616835   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.179945    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b83e31864e0ae98d29d960866012ecb0-kubeconfig\") pod \"kube-controller-manager-multinode-671000\" (UID: \"b83e31864e0ae98d29d960866012ecb0\") " pod="kube-system/kube-controller-manager-multinode-671000"
	I1014 08:47:24.616835   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.179963    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3e987cfaedc75c39145e8fc131c60c81-kubeconfig\") pod \"kube-scheduler-multinode-671000\" (UID: \"3e987cfaedc75c39145e8fc131c60c81\") " pod="kube-system/kube-scheduler-multinode-671000"
	I1014 08:47:24.617851   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.179984    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/679486a8f27de5805bc2e87fb1920dce-etcd-certs\") pod \"etcd-multinode-671000\" (UID: \"679486a8f27de5805bc2e87fb1920dce\") " pod="kube-system/etcd-multinode-671000"
	I1014 08:47:24.617851   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.180012    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/679486a8f27de5805bc2e87fb1920dce-etcd-data\") pod \"etcd-multinode-671000\" (UID: \"679486a8f27de5805bc2e87fb1920dce\") " pod="kube-system/etcd-multinode-671000"
	I1014 08:47:24.617851   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.180036    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/864b69f35cb25e9dd5d87a753a055a10-ca-certs\") pod \"kube-apiserver-multinode-671000\" (UID: \"864b69f35cb25e9dd5d87a753a055a10\") " pod="kube-system/kube-apiserver-multinode-671000"
	I1014 08:47:24.617851   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.180050    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/864b69f35cb25e9dd5d87a753a055a10-k8s-certs\") pod \"kube-apiserver-multinode-671000\" (UID: \"864b69f35cb25e9dd5d87a753a055a10\") " pod="kube-system/kube-apiserver-multinode-671000"
	I1014 08:47:24.617851   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.180068    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b83e31864e0ae98d29d960866012ecb0-ca-certs\") pod \"kube-controller-manager-multinode-671000\" (UID: \"b83e31864e0ae98d29d960866012ecb0\") " pod="kube-system/kube-controller-manager-multinode-671000"
	I1014 08:47:24.617851   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.180089    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b83e31864e0ae98d29d960866012ecb0-flexvolume-dir\") pod \"kube-controller-manager-multinode-671000\" (UID: \"b83e31864e0ae98d29d960866012ecb0\") " pod="kube-system/kube-controller-manager-multinode-671000"
	I1014 08:47:24.617851   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.180113    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b83e31864e0ae98d29d960866012ecb0-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-671000\" (UID: \"b83e31864e0ae98d29d960866012ecb0\") " pod="kube-system/kube-controller-manager-multinode-671000"
	I1014 08:47:24.617851   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.191857    1622 kubelet_node_status.go:72] "Attempting to register node" node="multinode-671000"
	I1014 08:47:24.617851   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: E1014 15:46:10.193195    1622 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.20.106.123:8443: connect: connection refused" node="multinode-671000"
	I1014 08:47:24.617851   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: E1014 15:46:10.421148    1622 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-671000?timeout=10s\": dial tcp 172.20.106.123:8443: connect: connection refused" interval="800ms"
	I1014 08:47:24.617851   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.595286    1622 kubelet_node_status.go:72] "Attempting to register node" node="multinode-671000"
	I1014 08:47:24.617851   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: E1014 15:46:10.596178    1622 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.20.106.123:8443: connect: connection refused" node="multinode-671000"
	I1014 08:47:24.617851   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: W1014 15:46:10.601172    1622 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.20.106.123:8443: connect: connection refused
	I1014 08:47:24.617851   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: E1014 15:46:10.601259    1622 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.20.106.123:8443: connect: connection refused" logger="UnhandledError"
	I1014 08:47:24.617851   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: W1014 15:46:10.913794    1622 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.20.106.123:8443: connect: connection refused
	I1014 08:47:24.617851   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: E1014 15:46:10.913870    1622 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.20.106.123:8443: connect: connection refused" logger="UnhandledError"
	I1014 08:47:24.617851   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 kubelet[1622]: W1014 15:46:11.078571    1622 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.20.106.123:8443: connect: connection refused
	I1014 08:47:24.617851   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 kubelet[1622]: E1014 15:46:11.078638    1622 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.20.106.123:8443: connect: connection refused" logger="UnhandledError"
	I1014 08:47:24.617851   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 kubelet[1622]: W1014 15:46:11.151154    1622 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-671000&limit=500&resourceVersion=0": dial tcp 172.20.106.123:8443: connect: connection refused
	I1014 08:47:24.618837   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 kubelet[1622]: E1014 15:46:11.151247    1622 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-671000&limit=500&resourceVersion=0\": dial tcp 172.20.106.123:8443: connect: connection refused" logger="UnhandledError"
	I1014 08:47:24.618837   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 kubelet[1622]: E1014 15:46:11.223425    1622 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-671000?timeout=10s\": dial tcp 172.20.106.123:8443: connect: connection refused" interval="1.6s"
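Across the lease-controller retries above, the interval doubles each time: 200ms, 400ms, 800ms, 1.6s. A minimal sketch of that probe-and-backoff pattern using the wait package from k8s.io/apimachinery; the endpoint is taken from the log, while the backoff shape is only modeled on the observed intervals, not kubelet's exact policy:

    package main

    import (
        "fmt"
        "net"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    func main() {
        addr := "172.20.106.123:8443" // apiserver endpoint from the log above
        backoff := wait.Backoff{Duration: 200 * time.Millisecond, Factor: 2.0, Steps: 4}

        err := wait.ExponentialBackoff(backoff, func() (bool, error) {
            conn, err := net.DialTimeout("tcp", addr, time.Second)
            if err != nil {
                fmt.Println("apiserver not reachable yet:", err)
                return false, nil // keep retrying with a doubled interval
            }
            conn.Close()
            return true, nil // reachable; stop retrying
        })
        if err != nil {
            fmt.Println("gave up:", err)
        }
    }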
	I1014 08:47:24.618837   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 kubelet[1622]: I1014 15:46:11.306759    1622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0697a11790e806fa4d679e3d97be4c7193692d1cb8f76d882cf3a75aa8e0c238"
	I1014 08:47:24.618837   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 kubelet[1622]: I1014 15:46:11.397496    1622 kubelet_node_status.go:72] "Attempting to register node" node="multinode-671000"
	I1014 08:47:24.618837   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 kubelet[1622]: E1014 15:46:11.399409    1622 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.20.106.123:8443: connect: connection refused" node="multinode-671000"
	I1014 08:47:24.618837   15224 command_runner.go:130] > Oct 14 15:46:13 multinode-671000 kubelet[1622]: I1014 15:46:13.001489    1622 kubelet_node_status.go:72] "Attempting to register node" node="multinode-671000"
	I1014 08:47:24.618837   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.316022    1622 kubelet_node_status.go:111] "Node was previously registered" node="multinode-671000"
	I1014 08:47:24.618837   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.316194    1622 kubelet_node_status.go:75] "Successfully registered node" node="multinode-671000"
	I1014 08:47:24.618837   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.316226    1622 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I1014 08:47:24.618837   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.317405    1622 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I1014 08:47:24.618837   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.318741    1622 setters.go:600] "Node became not ready" node="multinode-671000" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-10-14T15:46:15Z","lastTransitionTime":"2024-10-14T15:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
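The condition JSON above is the node's Ready condition flipping to False with reason KubeletNotReady while the CNI config is still uninitialized. A short client-go sketch that reads the same condition directly; the node name comes from the log, but the kubeconfig path is an assumption:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        node, err := clientset.CoreV1().Nodes().Get(context.TODO(), "multinode-671000", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        // Print the Ready condition, including the KubeletNotReady reason
        // seen in the log when the CNI config is missing.
        for _, cond := range node.Status.Conditions {
            if cond.Type == corev1.NodeReady {
                fmt.Printf("Ready=%s reason=%s message=%q\n", cond.Status, cond.Reason, cond.Message)
            }
        }
    }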
	I1014 08:47:24.618837   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: E1014 15:46:15.671751    1622 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"etcd-multinode-671000\" already exists" pod="kube-system/etcd-multinode-671000"
	I1014 08:47:24.618837   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.765668    1622 apiserver.go:52] "Watching apiserver"
	I1014 08:47:24.618837   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: E1014 15:46:15.771464    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:24.618837   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: E1014 15:46:15.772813    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:24.618837   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.774456    1622 kubelet.go:1895] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-671000" podUID="80ea37b8-9db1-4a39-9e9e-51c01edadfb1"
	I1014 08:47:24.618837   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.790436    1622 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	I1014 08:47:24.618837   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.804744    1622 kubelet.go:1900] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-multinode-671000"
	I1014 08:47:24.618837   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.875635    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f8d14473-8859-4015-84e9-d00656cc00c9-xtables-lock\") pod \"kube-proxy-r74dx\" (UID: \"f8d14473-8859-4015-84e9-d00656cc00c9\") " pod="kube-system/kube-proxy-r74dx"
	I1014 08:47:24.618837   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.875831    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a508bbf9-7565-4c73-98cb-e9684985c298-xtables-lock\") pod \"kindnet-wqrx6\" (UID: \"a508bbf9-7565-4c73-98cb-e9684985c298\") " pod="kube-system/kindnet-wqrx6"
	I1014 08:47:24.618837   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.876217    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f8d14473-8859-4015-84e9-d00656cc00c9-lib-modules\") pod \"kube-proxy-r74dx\" (UID: \"f8d14473-8859-4015-84e9-d00656cc00c9\") " pod="kube-system/kube-proxy-r74dx"
	I1014 08:47:24.618837   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.876424    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/fde8ff75-bc7f-4db4-b098-c3a08b38d205-tmp\") pod \"storage-provisioner\" (UID: \"fde8ff75-bc7f-4db4-b098-c3a08b38d205\") " pod="kube-system/storage-provisioner"
	I1014 08:47:24.618837   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.876537    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/a508bbf9-7565-4c73-98cb-e9684985c298-cni-cfg\") pod \"kindnet-wqrx6\" (UID: \"a508bbf9-7565-4c73-98cb-e9684985c298\") " pod="kube-system/kindnet-wqrx6"
	I1014 08:47:24.618837   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.876562    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a508bbf9-7565-4c73-98cb-e9684985c298-lib-modules\") pod \"kindnet-wqrx6\" (UID: \"a508bbf9-7565-4c73-98cb-e9684985c298\") " pod="kube-system/kindnet-wqrx6"
	I1014 08:47:24.618837   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: E1014 15:46:15.877886    1622 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I1014 08:47:24.618837   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: E1014 15:46:15.877952    1622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fd736862-9e3e-4a3d-9a86-08efd2338477-config-volume podName:fd736862-9e3e-4a3d-9a86-08efd2338477 nodeName:}" failed. No retries permitted until 2024-10-14 15:46:16.377930736 +0000 UTC m=+6.769202642 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/fd736862-9e3e-4a3d-9a86-08efd2338477-config-volume") pod "coredns-7c65d6cfc9-fs9ct" (UID: "fd736862-9e3e-4a3d-9a86-08efd2338477") : object "kube-system"/"coredns" not registered
	I1014 08:47:24.618837   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.896550    1622 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	I1014 08:47:24.619836   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: E1014 15:46:15.904462    1622 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:24.619836   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: E1014 15:46:15.904557    1622 projected.go:194] Error preparing data for projected volume kube-api-access-46k9l for pod default/busybox-7dff88458-vlp7j: object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:24.619836   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: E1014 15:46:15.904737    1622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/99068807-9f92-42f1-a1a0-fb6e533dc61a-kube-api-access-46k9l podName:99068807-9f92-42f1-a1a0-fb6e533dc61a nodeName:}" failed. No retries permitted until 2024-10-14 15:46:16.404658149 +0000 UTC m=+6.795930055 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-46k9l" (UniqueName: "kubernetes.io/projected/99068807-9f92-42f1-a1a0-fb6e533dc61a-kube-api-access-46k9l") pod "busybox-7dff88458-vlp7j" (UID: "99068807-9f92-42f1-a1a0-fb6e533dc61a") : object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:24.619836   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.919872    1622 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cf38ccc62eb74f6e658e1f66ae8cab1" path="/var/lib/kubelet/pods/3cf38ccc62eb74f6e658e1f66ae8cab1/volumes"
	I1014 08:47:24.619836   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.921055    1622 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-671000" podStartSLOduration=0.921039556 podStartE2EDuration="921.039556ms" podCreationTimestamp="2024-10-14 15:46:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-14 15:46:15.920643156 +0000 UTC m=+6.311915162" watchObservedRunningTime="2024-10-14 15:46:15.921039556 +0000 UTC m=+6.312311562"
	I1014 08:47:24.619836   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.921516    1622 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="778fdb620bffec66f911bf24e3c8210b" path="/var/lib/kubelet/pods/778fdb620bffec66f911bf24e3c8210b/volumes"
	I1014 08:47:24.619836   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 kubelet[1622]: E1014 15:46:16.380142    1622 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I1014 08:47:24.619836   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 kubelet[1622]: E1014 15:46:16.380233    1622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fd736862-9e3e-4a3d-9a86-08efd2338477-config-volume podName:fd736862-9e3e-4a3d-9a86-08efd2338477 nodeName:}" failed. No retries permitted until 2024-10-14 15:46:17.380214172 +0000 UTC m=+7.771486078 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/fd736862-9e3e-4a3d-9a86-08efd2338477-config-volume") pod "coredns-7c65d6cfc9-fs9ct" (UID: "fd736862-9e3e-4a3d-9a86-08efd2338477") : object "kube-system"/"coredns" not registered
	I1014 08:47:24.619836   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 kubelet[1622]: E1014 15:46:16.480798    1622 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:24.619836   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 kubelet[1622]: E1014 15:46:16.480831    1622 projected.go:194] Error preparing data for projected volume kube-api-access-46k9l for pod default/busybox-7dff88458-vlp7j: object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:24.619836   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 kubelet[1622]: E1014 15:46:16.480915    1622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/99068807-9f92-42f1-a1a0-fb6e533dc61a-kube-api-access-46k9l podName:99068807-9f92-42f1-a1a0-fb6e533dc61a nodeName:}" failed. No retries permitted until 2024-10-14 15:46:17.480897019 +0000 UTC m=+7.872168925 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-46k9l" (UniqueName: "kubernetes.io/projected/99068807-9f92-42f1-a1a0-fb6e533dc61a-kube-api-access-46k9l") pod "busybox-7dff88458-vlp7j" (UID: "99068807-9f92-42f1-a1a0-fb6e533dc61a") : object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:24.619836   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 kubelet[1622]: I1014 15:46:16.655226    1622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f8bdf552734e59df94d489a081585cdaeeaee729f0a0ae92105117d4d744f3f"
	I1014 08:47:24.619836   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 kubelet[1622]: I1014 15:46:16.670380    1622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cdcdd532ba1369fe0e866b321f18e0bd99a88a2699c325eea17a070d48f2f024"
	I1014 08:47:24.619836   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 kubelet[1622]: I1014 15:46:16.981444    1622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7bcadf1f0885ff3411b477a64c6af366c77eb264ce7a761f174b079b11e2e703"
	I1014 08:47:24.619836   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 kubelet[1622]: I1014 15:46:16.981500    1622 kubelet.go:1895] "Trying to delete pod" pod="kube-system/etcd-multinode-671000" podUID="56dfdf16-1224-41e3-94de-9d7f4021a17d"
	I1014 08:47:24.619836   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 kubelet[1622]: E1014 15:46:16.982831    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:24.619836   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 kubelet[1622]: I1014 15:46:17.011276    1622 kubelet.go:1900] "Deleted mirror pod because it is outdated" pod="kube-system/etcd-multinode-671000"
	I1014 08:47:24.619836   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 kubelet[1622]: E1014 15:46:17.388224    1622 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I1014 08:47:24.619836   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 kubelet[1622]: E1014 15:46:17.388370    1622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fd736862-9e3e-4a3d-9a86-08efd2338477-config-volume podName:fd736862-9e3e-4a3d-9a86-08efd2338477 nodeName:}" failed. No retries permitted until 2024-10-14 15:46:19.388351245 +0000 UTC m=+9.779623151 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/fd736862-9e3e-4a3d-9a86-08efd2338477-config-volume") pod "coredns-7c65d6cfc9-fs9ct" (UID: "fd736862-9e3e-4a3d-9a86-08efd2338477") : object "kube-system"/"coredns" not registered
	I1014 08:47:24.619836   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 kubelet[1622]: E1014 15:46:17.489591    1622 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:24.619836   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 kubelet[1622]: E1014 15:46:17.489649    1622 projected.go:194] Error preparing data for projected volume kube-api-access-46k9l for pod default/busybox-7dff88458-vlp7j: object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:24.619836   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 kubelet[1622]: E1014 15:46:17.489828    1622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/99068807-9f92-42f1-a1a0-fb6e533dc61a-kube-api-access-46k9l podName:99068807-9f92-42f1-a1a0-fb6e533dc61a nodeName:}" failed. No retries permitted until 2024-10-14 15:46:19.489808492 +0000 UTC m=+9.881080398 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-46k9l" (UniqueName: "kubernetes.io/projected/99068807-9f92-42f1-a1a0-fb6e533dc61a-kube-api-access-46k9l") pod "busybox-7dff88458-vlp7j" (UID: "99068807-9f92-42f1-a1a0-fb6e533dc61a") : object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:24.619836   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 kubelet[1622]: E1014 15:46:17.915482    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:24.619836   15224 command_runner.go:130] > Oct 14 15:46:18 multinode-671000 kubelet[1622]: I1014 15:46:18.163696    1622 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-multinode-671000" podStartSLOduration=1.163677409 podStartE2EDuration="1.163677409s" podCreationTimestamp="2024-10-14 15:46:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-14 15:46:18.133766095 +0000 UTC m=+8.525038101" watchObservedRunningTime="2024-10-14 15:46:18.163677409 +0000 UTC m=+8.554949415"
	I1014 08:47:24.619836   15224 command_runner.go:130] > Oct 14 15:46:18 multinode-671000 kubelet[1622]: E1014 15:46:18.908674    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:24.619836   15224 command_runner.go:130] > Oct 14 15:46:19 multinode-671000 kubelet[1622]: E1014 15:46:19.405477    1622 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I1014 08:47:24.619836   15224 command_runner.go:130] > Oct 14 15:46:19 multinode-671000 kubelet[1622]: E1014 15:46:19.405614    1622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fd736862-9e3e-4a3d-9a86-08efd2338477-config-volume podName:fd736862-9e3e-4a3d-9a86-08efd2338477 nodeName:}" failed. No retries permitted until 2024-10-14 15:46:23.405594191 +0000 UTC m=+13.796866097 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/fd736862-9e3e-4a3d-9a86-08efd2338477-config-volume") pod "coredns-7c65d6cfc9-fs9ct" (UID: "fd736862-9e3e-4a3d-9a86-08efd2338477") : object "kube-system"/"coredns" not registered
	I1014 08:47:24.619836   15224 command_runner.go:130] > Oct 14 15:46:19 multinode-671000 kubelet[1622]: E1014 15:46:19.506858    1622 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:24.620836   15224 command_runner.go:130] > Oct 14 15:46:19 multinode-671000 kubelet[1622]: E1014 15:46:19.507035    1622 projected.go:194] Error preparing data for projected volume kube-api-access-46k9l for pod default/busybox-7dff88458-vlp7j: object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:24.620836   15224 command_runner.go:130] > Oct 14 15:46:19 multinode-671000 kubelet[1622]: E1014 15:46:19.507122    1622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/99068807-9f92-42f1-a1a0-fb6e533dc61a-kube-api-access-46k9l podName:99068807-9f92-42f1-a1a0-fb6e533dc61a nodeName:}" failed. No retries permitted until 2024-10-14 15:46:23.507105839 +0000 UTC m=+13.898377845 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-46k9l" (UniqueName: "kubernetes.io/projected/99068807-9f92-42f1-a1a0-fb6e533dc61a-kube-api-access-46k9l") pod "busybox-7dff88458-vlp7j" (UID: "99068807-9f92-42f1-a1a0-fb6e533dc61a") : object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:24.620836   15224 command_runner.go:130] > Oct 14 15:46:19 multinode-671000 kubelet[1622]: E1014 15:46:19.931507    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:24.620836   15224 command_runner.go:130] > Oct 14 15:46:20 multinode-671000 kubelet[1622]: E1014 15:46:20.907760    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:24.620836   15224 command_runner.go:130] > Oct 14 15:46:21 multinode-671000 kubelet[1622]: E1014 15:46:21.908994    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:24.620836   15224 command_runner.go:130] > Oct 14 15:46:22 multinode-671000 kubelet[1622]: E1014 15:46:22.908657    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:24.620836   15224 command_runner.go:130] > Oct 14 15:46:23 multinode-671000 kubelet[1622]: E1014 15:46:23.462111    1622 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I1014 08:47:24.620836   15224 command_runner.go:130] > Oct 14 15:46:23 multinode-671000 kubelet[1622]: E1014 15:46:23.462203    1622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fd736862-9e3e-4a3d-9a86-08efd2338477-config-volume podName:fd736862-9e3e-4a3d-9a86-08efd2338477 nodeName:}" failed. No retries permitted until 2024-10-14 15:46:31.462185592 +0000 UTC m=+21.853457598 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/fd736862-9e3e-4a3d-9a86-08efd2338477-config-volume") pod "coredns-7c65d6cfc9-fs9ct" (UID: "fd736862-9e3e-4a3d-9a86-08efd2338477") : object "kube-system"/"coredns" not registered
	I1014 08:47:24.620836   15224 command_runner.go:130] > Oct 14 15:46:23 multinode-671000 kubelet[1622]: E1014 15:46:23.562508    1622 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:24.620836   15224 command_runner.go:130] > Oct 14 15:46:23 multinode-671000 kubelet[1622]: E1014 15:46:23.562563    1622 projected.go:194] Error preparing data for projected volume kube-api-access-46k9l for pod default/busybox-7dff88458-vlp7j: object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:24.620836   15224 command_runner.go:130] > Oct 14 15:46:23 multinode-671000 kubelet[1622]: E1014 15:46:23.562768    1622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/99068807-9f92-42f1-a1a0-fb6e533dc61a-kube-api-access-46k9l podName:99068807-9f92-42f1-a1a0-fb6e533dc61a nodeName:}" failed. No retries permitted until 2024-10-14 15:46:31.562650785 +0000 UTC m=+21.953922691 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-46k9l" (UniqueName: "kubernetes.io/projected/99068807-9f92-42f1-a1a0-fb6e533dc61a-kube-api-access-46k9l") pod "busybox-7dff88458-vlp7j" (UID: "99068807-9f92-42f1-a1a0-fb6e533dc61a") : object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:24.620836   15224 command_runner.go:130] > Oct 14 15:46:23 multinode-671000 kubelet[1622]: E1014 15:46:23.910119    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:24.620836   15224 command_runner.go:130] > Oct 14 15:46:24 multinode-671000 kubelet[1622]: E1014 15:46:24.908917    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:24.620836   15224 command_runner.go:130] > Oct 14 15:46:25 multinode-671000 kubelet[1622]: E1014 15:46:25.909505    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:24.620836   15224 command_runner.go:130] > Oct 14 15:46:26 multinode-671000 kubelet[1622]: E1014 15:46:26.907750    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:24.620836   15224 command_runner.go:130] > Oct 14 15:46:27 multinode-671000 kubelet[1622]: E1014 15:46:27.908822    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:24.620836   15224 command_runner.go:130] > Oct 14 15:46:28 multinode-671000 kubelet[1622]: E1014 15:46:28.908219    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:24.620836   15224 command_runner.go:130] > Oct 14 15:46:29 multinode-671000 kubelet[1622]: E1014 15:46:29.910218    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:24.620836   15224 command_runner.go:130] > Oct 14 15:46:30 multinode-671000 kubelet[1622]: E1014 15:46:30.908259    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:24.620836   15224 command_runner.go:130] > Oct 14 15:46:31 multinode-671000 kubelet[1622]: E1014 15:46:31.541520    1622 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I1014 08:47:24.620836   15224 command_runner.go:130] > Oct 14 15:46:31 multinode-671000 kubelet[1622]: E1014 15:46:31.541653    1622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fd736862-9e3e-4a3d-9a86-08efd2338477-config-volume podName:fd736862-9e3e-4a3d-9a86-08efd2338477 nodeName:}" failed. No retries permitted until 2024-10-14 15:46:47.541634578 +0000 UTC m=+37.932906484 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/fd736862-9e3e-4a3d-9a86-08efd2338477-config-volume") pod "coredns-7c65d6cfc9-fs9ct" (UID: "fd736862-9e3e-4a3d-9a86-08efd2338477") : object "kube-system"/"coredns" not registered
	I1014 08:47:24.620836   15224 command_runner.go:130] > Oct 14 15:46:31 multinode-671000 kubelet[1622]: E1014 15:46:31.641930    1622 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:24.620836   15224 command_runner.go:130] > Oct 14 15:46:31 multinode-671000 kubelet[1622]: E1014 15:46:31.641961    1622 projected.go:194] Error preparing data for projected volume kube-api-access-46k9l for pod default/busybox-7dff88458-vlp7j: object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:24.620836   15224 command_runner.go:130] > Oct 14 15:46:31 multinode-671000 kubelet[1622]: E1014 15:46:31.642009    1622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/99068807-9f92-42f1-a1a0-fb6e533dc61a-kube-api-access-46k9l podName:99068807-9f92-42f1-a1a0-fb6e533dc61a nodeName:}" failed. No retries permitted until 2024-10-14 15:46:47.641990935 +0000 UTC m=+38.033262841 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-46k9l" (UniqueName: "kubernetes.io/projected/99068807-9f92-42f1-a1a0-fb6e533dc61a-kube-api-access-46k9l") pod "busybox-7dff88458-vlp7j" (UID: "99068807-9f92-42f1-a1a0-fb6e533dc61a") : object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:24.621836   15224 command_runner.go:130] > Oct 14 15:46:31 multinode-671000 kubelet[1622]: E1014 15:46:31.908383    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:24.621836   15224 command_runner.go:130] > Oct 14 15:46:32 multinode-671000 kubelet[1622]: E1014 15:46:32.908527    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:24.621836   15224 command_runner.go:130] > Oct 14 15:46:33 multinode-671000 kubelet[1622]: E1014 15:46:33.910838    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:24.621836   15224 command_runner.go:130] > Oct 14 15:46:34 multinode-671000 kubelet[1622]: E1014 15:46:34.908180    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:24.621836   15224 command_runner.go:130] > Oct 14 15:46:35 multinode-671000 kubelet[1622]: E1014 15:46:35.908574    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:24.621836   15224 command_runner.go:130] > Oct 14 15:46:36 multinode-671000 kubelet[1622]: E1014 15:46:36.907722    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:24.621836   15224 command_runner.go:130] > Oct 14 15:46:37 multinode-671000 kubelet[1622]: E1014 15:46:37.907861    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:24.621836   15224 command_runner.go:130] > Oct 14 15:46:38 multinode-671000 kubelet[1622]: E1014 15:46:38.908728    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:24.621836   15224 command_runner.go:130] > Oct 14 15:46:39 multinode-671000 kubelet[1622]: E1014 15:46:39.908994    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:24.621836   15224 command_runner.go:130] > Oct 14 15:46:40 multinode-671000 kubelet[1622]: E1014 15:46:40.908676    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:24.621836   15224 command_runner.go:130] > Oct 14 15:46:41 multinode-671000 kubelet[1622]: E1014 15:46:41.909525    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:24.621836   15224 command_runner.go:130] > Oct 14 15:46:42 multinode-671000 kubelet[1622]: E1014 15:46:42.908679    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:24.621836   15224 command_runner.go:130] > Oct 14 15:46:43 multinode-671000 kubelet[1622]: E1014 15:46:43.908615    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:24.621836   15224 command_runner.go:130] > Oct 14 15:46:44 multinode-671000 kubelet[1622]: E1014 15:46:44.908884    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:24.621836   15224 command_runner.go:130] > Oct 14 15:46:45 multinode-671000 kubelet[1622]: E1014 15:46:45.908370    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:24.621836   15224 command_runner.go:130] > Oct 14 15:46:46 multinode-671000 kubelet[1622]: E1014 15:46:46.909263    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:24.621836   15224 command_runner.go:130] > Oct 14 15:46:47 multinode-671000 kubelet[1622]: E1014 15:46:47.573240    1622 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I1014 08:47:24.621836   15224 command_runner.go:130] > Oct 14 15:46:47 multinode-671000 kubelet[1622]: E1014 15:46:47.573353    1622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fd736862-9e3e-4a3d-9a86-08efd2338477-config-volume podName:fd736862-9e3e-4a3d-9a86-08efd2338477 nodeName:}" failed. No retries permitted until 2024-10-14 15:47:19.573334644 +0000 UTC m=+69.964606650 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/fd736862-9e3e-4a3d-9a86-08efd2338477-config-volume") pod "coredns-7c65d6cfc9-fs9ct" (UID: "fd736862-9e3e-4a3d-9a86-08efd2338477") : object "kube-system"/"coredns" not registered
	I1014 08:47:24.622848   15224 command_runner.go:130] > Oct 14 15:46:47 multinode-671000 kubelet[1622]: E1014 15:46:47.673810    1622 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:24.622848   15224 command_runner.go:130] > Oct 14 15:46:47 multinode-671000 kubelet[1622]: E1014 15:46:47.673907    1622 projected.go:194] Error preparing data for projected volume kube-api-access-46k9l for pod default/busybox-7dff88458-vlp7j: object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:24.622848   15224 command_runner.go:130] > Oct 14 15:46:47 multinode-671000 kubelet[1622]: E1014 15:46:47.674014    1622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/99068807-9f92-42f1-a1a0-fb6e533dc61a-kube-api-access-46k9l podName:99068807-9f92-42f1-a1a0-fb6e533dc61a nodeName:}" failed. No retries permitted until 2024-10-14 15:47:19.673994259 +0000 UTC m=+70.065266165 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-46k9l" (UniqueName: "kubernetes.io/projected/99068807-9f92-42f1-a1a0-fb6e533dc61a-kube-api-access-46k9l") pod "busybox-7dff88458-vlp7j" (UID: "99068807-9f92-42f1-a1a0-fb6e533dc61a") : object "default"/"kube-root-ca.crt" not registered
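The retry intervals in the mount failures above follow the kubelet's standard exponential backoff: durationBeforeRetry doubles from 8s (15:46:23) to 16s (15:46:31) to 32s (15:46:47). The "not registered" errors normally clear on their own once the node's informers resync; if they did not, a quick check that the referenced objects actually exist would look roughly like the following, using minikube's bundled kubectl against the multinode-671000 profile named in this log (an illustrative sketch, not part of the captured run):

  out/minikube-windows-amd64.exe -p multinode-671000 kubectl -- get configmap kube-root-ca.crt -n default
  out/minikube-windows-amd64.exe -p multinode-671000 kubectl -- get configmap coredns -n kube-system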
	I1014 08:47:24.622848   15224 command_runner.go:130] > Oct 14 15:46:47 multinode-671000 kubelet[1622]: E1014 15:46:47.908883    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:24.622848   15224 command_runner.go:130] > Oct 14 15:46:48 multinode-671000 kubelet[1622]: I1014 15:46:48.486803    1622 scope.go:117] "RemoveContainer" containerID="3d8b7bae48a59c755a1ffda14e7fdd0c2302b394db67b7de21fd5b819dad243b"
	I1014 08:47:24.622848   15224 command_runner.go:130] > Oct 14 15:46:48 multinode-671000 kubelet[1622]: I1014 15:46:48.487259    1622 scope.go:117] "RemoveContainer" containerID="c76c2585681071ddc096fbc042a644b58d0b7d5d604107c6906a076c7aa1cca6"
	I1014 08:47:24.622848   15224 command_runner.go:130] > Oct 14 15:46:48 multinode-671000 kubelet[1622]: E1014 15:46:48.487448    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(fde8ff75-bc7f-4db4-b098-c3a08b38d205)\"" pod="kube-system/storage-provisioner" podUID="fde8ff75-bc7f-4db4-b098-c3a08b38d205"
	I1014 08:47:24.622848   15224 command_runner.go:130] > Oct 14 15:46:48 multinode-671000 kubelet[1622]: E1014 15:46:48.908732    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:24.622848   15224 command_runner.go:130] > Oct 14 15:46:49 multinode-671000 kubelet[1622]: E1014 15:46:49.908877    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:24.622848   15224 command_runner.go:130] > Oct 14 15:46:50 multinode-671000 kubelet[1622]: E1014 15:46:50.907718    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:24.622848   15224 command_runner.go:130] > Oct 14 15:46:51 multinode-671000 kubelet[1622]: E1014 15:46:51.909552    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:24.622848   15224 command_runner.go:130] > Oct 14 15:46:52 multinode-671000 kubelet[1622]: E1014 15:46:52.908818    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:24.622848   15224 command_runner.go:130] > Oct 14 15:46:53 multinode-671000 kubelet[1622]: E1014 15:46:53.908389    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:24.622848   15224 command_runner.go:130] > Oct 14 15:46:54 multinode-671000 kubelet[1622]: E1014 15:46:54.908089    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:24.622848   15224 command_runner.go:130] > Oct 14 15:46:55 multinode-671000 kubelet[1622]: E1014 15:46:55.908582    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:24.622848   15224 command_runner.go:130] > Oct 14 15:46:56 multinode-671000 kubelet[1622]: E1014 15:46:56.908839    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:24.622848   15224 command_runner.go:130] > Oct 14 15:46:57 multinode-671000 kubelet[1622]: E1014 15:46:57.909489    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:24.622848   15224 command_runner.go:130] > Oct 14 15:46:58 multinode-671000 kubelet[1622]: E1014 15:46:58.908804    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:24.622848   15224 command_runner.go:130] > Oct 14 15:46:59 multinode-671000 kubelet[1622]: I1014 15:46:59.853068    1622 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
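The long run of "cni config uninitialized" errors ends here: at 15:46:59 the kubelet marks the node Ready, presumably once the cluster's CNI plugin (kindnet, per the pod lists in the node descriptions below) rewrote its config after the restart. Had the loop persisted, inspecting the on-disk CNI config inside the guest would be the obvious next step; a minimal sketch, assuming the same profile name:

  out/minikube-windows-amd64.exe -p multinode-671000 ssh "ls -l /etc/cni/net.d && sudo cat /etc/cni/net.d/*"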
	I1014 08:47:24.622848   15224 command_runner.go:130] > Oct 14 15:47:02 multinode-671000 kubelet[1622]: I1014 15:47:02.908981    1622 scope.go:117] "RemoveContainer" containerID="c76c2585681071ddc096fbc042a644b58d0b7d5d604107c6906a076c7aa1cca6"
	I1014 08:47:24.623844   15224 command_runner.go:130] > Oct 14 15:47:09 multinode-671000 kubelet[1622]: I1014 15:47:09.901385    1622 scope.go:117] "RemoveContainer" containerID="0b5a6e440d7b67606ed0a4dfa4d07715b1fd7e6f53bc0b8779f86a33c5baf6e9"
	I1014 08:47:24.623844   15224 command_runner.go:130] > Oct 14 15:47:09 multinode-671000 kubelet[1622]: I1014 15:47:09.946936    1622 scope.go:117] "RemoveContainer" containerID="1ba3cd8bbd5963097f4d674fc98eca21e1a710f5a150a067747aa4e6c922d2fe"
	I1014 08:47:24.623844   15224 command_runner.go:130] > Oct 14 15:47:09 multinode-671000 kubelet[1622]: E1014 15:47:09.949713    1622 iptables.go:577] "Could not set up iptables canary" err=<
	I1014 08:47:24.623844   15224 command_runner.go:130] > Oct 14 15:47:09 multinode-671000 kubelet[1622]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I1014 08:47:24.623844   15224 command_runner.go:130] > Oct 14 15:47:09 multinode-671000 kubelet[1622]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I1014 08:47:24.623844   15224 command_runner.go:130] > Oct 14 15:47:09 multinode-671000 kubelet[1622]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I1014 08:47:24.623844   15224 command_runner.go:130] > Oct 14 15:47:09 multinode-671000 kubelet[1622]:  > table="nat" chain="KUBE-KUBELET-CANARY"
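The "iptables canary" failure above is IPv6-only: ip6tables cannot initialize the nat table because the guest kernel has no ip6table_nat module loaded, while the IPv4 canary keeps working. On a single-stack cluster this is noisy but generally harmless. Confirming the missing module from the host is straightforward; a sketch, again assuming the multinode-671000 profile:

  out/minikube-windows-amd64.exe -p multinode-671000 ssh "lsmod | grep -i ip6table_nat; sudo modprobe ip6table_nat"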
	I1014 08:47:24.665830   15224 logs.go:123] Gathering logs for describe nodes ...
	I1014 08:47:24.665830   15224 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 08:47:24.938923   15224 command_runner.go:130] > Name:               multinode-671000
	I1014 08:47:24.938923   15224 command_runner.go:130] > Roles:              control-plane
	I1014 08:47:24.938923   15224 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I1014 08:47:24.938923   15224 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I1014 08:47:24.938923   15224 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I1014 08:47:24.938923   15224 command_runner.go:130] >                     kubernetes.io/hostname=multinode-671000
	I1014 08:47:24.938923   15224 command_runner.go:130] >                     kubernetes.io/os=linux
	I1014 08:47:24.938923   15224 command_runner.go:130] >                     minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b
	I1014 08:47:24.938923   15224 command_runner.go:130] >                     minikube.k8s.io/name=multinode-671000
	I1014 08:47:24.938923   15224 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I1014 08:47:24.938923   15224 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_10_14T08_22_41_0700
	I1014 08:47:24.938923   15224 command_runner.go:130] >                     minikube.k8s.io/version=v1.34.0
	I1014 08:47:24.938923   15224 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I1014 08:47:24.938923   15224 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I1014 08:47:24.938923   15224 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I1014 08:47:24.938923   15224 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I1014 08:47:24.938923   15224 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I1014 08:47:24.938923   15224 command_runner.go:130] > CreationTimestamp:  Mon, 14 Oct 2024 15:22:36 +0000
	I1014 08:47:24.938923   15224 command_runner.go:130] > Taints:             <none>
	I1014 08:47:24.938923   15224 command_runner.go:130] > Unschedulable:      false
	I1014 08:47:24.938923   15224 command_runner.go:130] > Lease:
	I1014 08:47:24.938923   15224 command_runner.go:130] >   HolderIdentity:  multinode-671000
	I1014 08:47:24.938923   15224 command_runner.go:130] >   AcquireTime:     <unset>
	I1014 08:47:24.938923   15224 command_runner.go:130] >   RenewTime:       Mon, 14 Oct 2024 15:47:16 +0000
	I1014 08:47:24.938923   15224 command_runner.go:130] > Conditions:
	I1014 08:47:24.938923   15224 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I1014 08:47:24.938923   15224 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I1014 08:47:24.938923   15224 command_runner.go:130] >   MemoryPressure   False   Mon, 14 Oct 2024 15:46:59 +0000   Mon, 14 Oct 2024 15:22:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I1014 08:47:24.938923   15224 command_runner.go:130] >   DiskPressure     False   Mon, 14 Oct 2024 15:46:59 +0000   Mon, 14 Oct 2024 15:22:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I1014 08:47:24.938923   15224 command_runner.go:130] >   PIDPressure      False   Mon, 14 Oct 2024 15:46:59 +0000   Mon, 14 Oct 2024 15:22:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I1014 08:47:24.938923   15224 command_runner.go:130] >   Ready            True    Mon, 14 Oct 2024 15:46:59 +0000   Mon, 14 Oct 2024 15:46:59 +0000   KubeletReady                 kubelet is posting ready status
	I1014 08:47:24.938923   15224 command_runner.go:130] > Addresses:
	I1014 08:47:24.938923   15224 command_runner.go:130] >   InternalIP:  172.20.106.123
	I1014 08:47:24.938923   15224 command_runner.go:130] >   Hostname:    multinode-671000
	I1014 08:47:24.938923   15224 command_runner.go:130] > Capacity:
	I1014 08:47:24.938923   15224 command_runner.go:130] >   cpu:                2
	I1014 08:47:24.938923   15224 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I1014 08:47:24.938923   15224 command_runner.go:130] >   hugepages-2Mi:      0
	I1014 08:47:24.938923   15224 command_runner.go:130] >   memory:             2164264Ki
	I1014 08:47:24.938923   15224 command_runner.go:130] >   pods:               110
	I1014 08:47:24.938923   15224 command_runner.go:130] > Allocatable:
	I1014 08:47:24.938923   15224 command_runner.go:130] >   cpu:                2
	I1014 08:47:24.938923   15224 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I1014 08:47:24.938923   15224 command_runner.go:130] >   hugepages-2Mi:      0
	I1014 08:47:24.938923   15224 command_runner.go:130] >   memory:             2164264Ki
	I1014 08:47:24.938923   15224 command_runner.go:130] >   pods:               110
	I1014 08:47:24.938923   15224 command_runner.go:130] > System Info:
	I1014 08:47:24.938923   15224 command_runner.go:130] >   Machine ID:                 fc389f3b9e2846b4b909cfc8e7984541
	I1014 08:47:24.938923   15224 command_runner.go:130] >   System UUID:                f72bc210-2ad8-1e4f-ad7d-3e75159c3d98
	I1014 08:47:24.938923   15224 command_runner.go:130] >   Boot ID:                    98d09a99-1eff-402d-837f-6cacdc4463d7
	I1014 08:47:24.938923   15224 command_runner.go:130] >   Kernel Version:             5.10.207
	I1014 08:47:24.938923   15224 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I1014 08:47:24.938923   15224 command_runner.go:130] >   Operating System:           linux
	I1014 08:47:24.938923   15224 command_runner.go:130] >   Architecture:               amd64
	I1014 08:47:24.938923   15224 command_runner.go:130] >   Container Runtime Version:  docker://27.3.1
	I1014 08:47:24.938923   15224 command_runner.go:130] >   Kubelet Version:            v1.31.1
	I1014 08:47:24.938923   15224 command_runner.go:130] >   Kube-Proxy Version:         v1.31.1
	I1014 08:47:24.938923   15224 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I1014 08:47:24.938923   15224 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I1014 08:47:24.938923   15224 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I1014 08:47:24.938923   15224 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I1014 08:47:24.938923   15224 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I1014 08:47:24.938923   15224 command_runner.go:130] >   default                     busybox-7dff88458-vlp7j                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I1014 08:47:24.938923   15224 command_runner.go:130] >   kube-system                 coredns-7c65d6cfc9-fs9ct                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     24m
	I1014 08:47:24.938923   15224 command_runner.go:130] >   kube-system                 etcd-multinode-671000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         67s
	I1014 08:47:24.938923   15224 command_runner.go:130] >   kube-system                 kindnet-wqrx6                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      24m
	I1014 08:47:24.938923   15224 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-671000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         69s
	I1014 08:47:24.939921   15224 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-671000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         24m
	I1014 08:47:24.939921   15224 command_runner.go:130] >   kube-system                 kube-proxy-r74dx                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	I1014 08:47:24.939921   15224 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-671000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24m
	I1014 08:47:24.939921   15224 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	I1014 08:47:24.939921   15224 command_runner.go:130] > Allocated resources:
	I1014 08:47:24.939921   15224 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I1014 08:47:24.939921   15224 command_runner.go:130] >   Resource           Requests     Limits
	I1014 08:47:24.939921   15224 command_runner.go:130] >   --------           --------     ------
	I1014 08:47:24.939921   15224 command_runner.go:130] >   cpu                850m (42%)   100m (5%)
	I1014 08:47:24.939921   15224 command_runner.go:130] >   memory             220Mi (10%)  220Mi (10%)
	I1014 08:47:24.939921   15224 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I1014 08:47:24.939921   15224 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I1014 08:47:24.939921   15224 command_runner.go:130] > Events:
	I1014 08:47:24.939921   15224 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I1014 08:47:24.939921   15224 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I1014 08:47:24.939921   15224 command_runner.go:130] >   Normal  Starting                 24m                kube-proxy       
	I1014 08:47:24.939921   15224 command_runner.go:130] >   Normal  Starting                 66s                kube-proxy       
	I1014 08:47:24.939921   15224 command_runner.go:130] >   Normal  Starting                 24m                kubelet          Starting kubelet.
	I1014 08:47:24.939921   15224 command_runner.go:130] >   Normal  NodeHasSufficientMemory  24m (x8 over 24m)  kubelet          Node multinode-671000 status is now: NodeHasSufficientMemory
	I1014 08:47:24.939921   15224 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    24m (x8 over 24m)  kubelet          Node multinode-671000 status is now: NodeHasNoDiskPressure
	I1014 08:47:24.939921   15224 command_runner.go:130] >   Normal  NodeHasSufficientPID     24m (x7 over 24m)  kubelet          Node multinode-671000 status is now: NodeHasSufficientPID
	I1014 08:47:24.939921   15224 command_runner.go:130] >   Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	I1014 08:47:24.939921   15224 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    24m                kubelet          Node multinode-671000 status is now: NodeHasNoDiskPressure
	I1014 08:47:24.939921   15224 command_runner.go:130] >   Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	I1014 08:47:24.939921   15224 command_runner.go:130] >   Normal  NodeHasSufficientMemory  24m                kubelet          Node multinode-671000 status is now: NodeHasSufficientMemory
	I1014 08:47:24.939921   15224 command_runner.go:130] >   Normal  NodeHasSufficientPID     24m                kubelet          Node multinode-671000 status is now: NodeHasSufficientPID
	I1014 08:47:24.939921   15224 command_runner.go:130] >   Normal  Starting                 24m                kubelet          Starting kubelet.
	I1014 08:47:24.939921   15224 command_runner.go:130] >   Normal  RegisteredNode           24m                node-controller  Node multinode-671000 event: Registered Node multinode-671000 in Controller
	I1014 08:47:24.939921   15224 command_runner.go:130] >   Normal  NodeReady                24m                kubelet          Node multinode-671000 status is now: NodeReady
	I1014 08:47:24.939921   15224 command_runner.go:130] >   Normal  Starting                 75s                kubelet          Starting kubelet.
	I1014 08:47:24.939921   15224 command_runner.go:130] >   Normal  NodeAllocatableEnforced  75s                kubelet          Updated Node Allocatable limit across pods
	I1014 08:47:24.939921   15224 command_runner.go:130] >   Normal  NodeHasSufficientMemory  74s (x8 over 75s)  kubelet          Node multinode-671000 status is now: NodeHasSufficientMemory
	I1014 08:47:24.939921   15224 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    74s (x8 over 75s)  kubelet          Node multinode-671000 status is now: NodeHasNoDiskPressure
	I1014 08:47:24.939921   15224 command_runner.go:130] >   Normal  NodeHasSufficientPID     74s (x7 over 75s)  kubelet          Node multinode-671000 status is now: NodeHasSufficientPID
	I1014 08:47:24.939921   15224 command_runner.go:130] >   Normal  RegisteredNode           66s                node-controller  Node multinode-671000 event: Registered Node multinode-671000 in Controller
	I1014 08:47:24.939921   15224 command_runner.go:130] > Name:               multinode-671000-m02
	I1014 08:47:24.939921   15224 command_runner.go:130] > Roles:              <none>
	I1014 08:47:24.939921   15224 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I1014 08:47:24.939921   15224 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I1014 08:47:24.939921   15224 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I1014 08:47:24.939921   15224 command_runner.go:130] >                     kubernetes.io/hostname=multinode-671000-m02
	I1014 08:47:24.939921   15224 command_runner.go:130] >                     kubernetes.io/os=linux
	I1014 08:47:24.939921   15224 command_runner.go:130] >                     minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b
	I1014 08:47:24.939921   15224 command_runner.go:130] >                     minikube.k8s.io/name=multinode-671000
	I1014 08:47:24.939921   15224 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I1014 08:47:24.939921   15224 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_10_14T08_25_50_0700
	I1014 08:47:24.939921   15224 command_runner.go:130] >                     minikube.k8s.io/version=v1.34.0
	I1014 08:47:24.939921   15224 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I1014 08:47:24.939921   15224 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I1014 08:47:24.939921   15224 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I1014 08:47:24.939921   15224 command_runner.go:130] > CreationTimestamp:  Mon, 14 Oct 2024 15:25:49 +0000
	I1014 08:47:24.939921   15224 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I1014 08:47:24.939921   15224 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I1014 08:47:24.939921   15224 command_runner.go:130] > Unschedulable:      false
	I1014 08:47:24.939921   15224 command_runner.go:130] > Lease:
	I1014 08:47:24.939921   15224 command_runner.go:130] >   HolderIdentity:  multinode-671000-m02
	I1014 08:47:24.939921   15224 command_runner.go:130] >   AcquireTime:     <unset>
	I1014 08:47:24.939921   15224 command_runner.go:130] >   RenewTime:       Mon, 14 Oct 2024 15:43:00 +0000
	I1014 08:47:24.939921   15224 command_runner.go:130] > Conditions:
	I1014 08:47:24.939921   15224 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I1014 08:47:24.939921   15224 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I1014 08:47:24.939921   15224 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 14 Oct 2024 15:42:08 +0000   Mon, 14 Oct 2024 15:43:45 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I1014 08:47:24.939921   15224 command_runner.go:130] >   DiskPressure     Unknown   Mon, 14 Oct 2024 15:42:08 +0000   Mon, 14 Oct 2024 15:43:45 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I1014 08:47:24.939921   15224 command_runner.go:130] >   PIDPressure      Unknown   Mon, 14 Oct 2024 15:42:08 +0000   Mon, 14 Oct 2024 15:43:45 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I1014 08:47:24.939921   15224 command_runner.go:130] >   Ready            Unknown   Mon, 14 Oct 2024 15:42:08 +0000   Mon, 14 Oct 2024 15:43:45 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I1014 08:47:24.939921   15224 command_runner.go:130] > Addresses:
	I1014 08:47:24.939921   15224 command_runner.go:130] >   InternalIP:  172.20.109.137
	I1014 08:47:24.939921   15224 command_runner.go:130] >   Hostname:    multinode-671000-m02
	I1014 08:47:24.939921   15224 command_runner.go:130] > Capacity:
	I1014 08:47:24.939921   15224 command_runner.go:130] >   cpu:                2
	I1014 08:47:24.939921   15224 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I1014 08:47:24.939921   15224 command_runner.go:130] >   hugepages-2Mi:      0
	I1014 08:47:24.939921   15224 command_runner.go:130] >   memory:             2164264Ki
	I1014 08:47:24.939921   15224 command_runner.go:130] >   pods:               110
	I1014 08:47:24.939921   15224 command_runner.go:130] > Allocatable:
	I1014 08:47:24.939921   15224 command_runner.go:130] >   cpu:                2
	I1014 08:47:24.939921   15224 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I1014 08:47:24.939921   15224 command_runner.go:130] >   hugepages-2Mi:      0
	I1014 08:47:24.940918   15224 command_runner.go:130] >   memory:             2164264Ki
	I1014 08:47:24.940918   15224 command_runner.go:130] >   pods:               110
	I1014 08:47:24.940918   15224 command_runner.go:130] > System Info:
	I1014 08:47:24.940918   15224 command_runner.go:130] >   Machine ID:                 d57a3470ff3c4f03abf025a19c5c23d9
	I1014 08:47:24.940918   15224 command_runner.go:130] >   System UUID:                d36e677f-0d87-a94f-af28-d8324326f88f
	I1014 08:47:24.940918   15224 command_runner.go:130] >   Boot ID:                    33649cab-156e-4789-8adf-a6fc060b3de7
	I1014 08:47:24.940918   15224 command_runner.go:130] >   Kernel Version:             5.10.207
	I1014 08:47:24.940918   15224 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I1014 08:47:24.940918   15224 command_runner.go:130] >   Operating System:           linux
	I1014 08:47:24.940918   15224 command_runner.go:130] >   Architecture:               amd64
	I1014 08:47:24.940918   15224 command_runner.go:130] >   Container Runtime Version:  docker://27.3.1
	I1014 08:47:24.940918   15224 command_runner.go:130] >   Kubelet Version:            v1.31.1
	I1014 08:47:24.940918   15224 command_runner.go:130] >   Kube-Proxy Version:         v1.31.1
	I1014 08:47:24.940918   15224 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I1014 08:47:24.940918   15224 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I1014 08:47:24.940918   15224 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I1014 08:47:24.940918   15224 command_runner.go:130] >   Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I1014 08:47:24.940918   15224 command_runner.go:130] >   ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	I1014 08:47:24.940918   15224 command_runner.go:130] >   default                     busybox-7dff88458-bnqj6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I1014 08:47:24.940918   15224 command_runner.go:130] >   kube-system                 kindnet-rgbjf              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      21m
	I1014 08:47:24.940918   15224 command_runner.go:130] >   kube-system                 kube-proxy-kbpjf           0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	I1014 08:47:24.940918   15224 command_runner.go:130] > Allocated resources:
	I1014 08:47:24.940918   15224 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I1014 08:47:24.940918   15224 command_runner.go:130] >   Resource           Requests   Limits
	I1014 08:47:24.940918   15224 command_runner.go:130] >   --------           --------   ------
	I1014 08:47:24.940918   15224 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I1014 08:47:24.940918   15224 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I1014 08:47:24.940918   15224 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I1014 08:47:24.940918   15224 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I1014 08:47:24.940918   15224 command_runner.go:130] > Events:
	I1014 08:47:24.940918   15224 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I1014 08:47:24.940918   15224 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I1014 08:47:24.940918   15224 command_runner.go:130] >   Normal  Starting                 21m                kube-proxy       
	I1014 08:47:24.940918   15224 command_runner.go:130] >   Normal  NodeHasSufficientMemory  21m (x2 over 21m)  kubelet          Node multinode-671000-m02 status is now: NodeHasSufficientMemory
	I1014 08:47:24.940918   15224 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    21m (x2 over 21m)  kubelet          Node multinode-671000-m02 status is now: NodeHasNoDiskPressure
	I1014 08:47:24.940918   15224 command_runner.go:130] >   Normal  NodeHasSufficientPID     21m (x2 over 21m)  kubelet          Node multinode-671000-m02 status is now: NodeHasSufficientPID
	I1014 08:47:24.940918   15224 command_runner.go:130] >   Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	I1014 08:47:24.940918   15224 command_runner.go:130] >   Normal  RegisteredNode           21m                node-controller  Node multinode-671000-m02 event: Registered Node multinode-671000-m02 in Controller
	I1014 08:47:24.940918   15224 command_runner.go:130] >   Normal  NodeReady                21m                kubelet          Node multinode-671000-m02 status is now: NodeReady
	I1014 08:47:24.940918   15224 command_runner.go:130] >   Normal  NodeNotReady             3m39s              node-controller  Node multinode-671000-m02 status is now: NodeNotReady
	I1014 08:47:24.940918   15224 command_runner.go:130] >   Normal  RegisteredNode           66s                node-controller  Node multinode-671000-m02 event: Registered Node multinode-671000-m02 in Controller
	I1014 08:47:24.940918   15224 command_runner.go:130] > Name:               multinode-671000-m03
	I1014 08:47:24.940918   15224 command_runner.go:130] > Roles:              <none>
	I1014 08:47:24.940918   15224 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I1014 08:47:24.940918   15224 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I1014 08:47:24.940918   15224 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I1014 08:47:24.940918   15224 command_runner.go:130] >                     kubernetes.io/hostname=multinode-671000-m03
	I1014 08:47:24.940918   15224 command_runner.go:130] >                     kubernetes.io/os=linux
	I1014 08:47:24.940918   15224 command_runner.go:130] >                     minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b
	I1014 08:47:24.940918   15224 command_runner.go:130] >                     minikube.k8s.io/name=multinode-671000
	I1014 08:47:24.940918   15224 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I1014 08:47:24.940918   15224 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_10_14T08_41_35_0700
	I1014 08:47:24.940918   15224 command_runner.go:130] >                     minikube.k8s.io/version=v1.34.0
	I1014 08:47:24.940918   15224 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I1014 08:47:24.940918   15224 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I1014 08:47:24.940918   15224 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I1014 08:47:24.940918   15224 command_runner.go:130] > CreationTimestamp:  Mon, 14 Oct 2024 15:41:34 +0000
	I1014 08:47:24.940918   15224 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I1014 08:47:24.940918   15224 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I1014 08:47:24.940918   15224 command_runner.go:130] > Unschedulable:      false
	I1014 08:47:24.940918   15224 command_runner.go:130] > Lease:
	I1014 08:47:24.940918   15224 command_runner.go:130] >   HolderIdentity:  multinode-671000-m03
	I1014 08:47:24.940918   15224 command_runner.go:130] >   AcquireTime:     <unset>
	I1014 08:47:24.940918   15224 command_runner.go:130] >   RenewTime:       Mon, 14 Oct 2024 15:42:46 +0000
	I1014 08:47:24.940918   15224 command_runner.go:130] > Conditions:
	I1014 08:47:24.940918   15224 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I1014 08:47:24.940918   15224 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I1014 08:47:24.940918   15224 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 14 Oct 2024 15:41:53 +0000   Mon, 14 Oct 2024 15:43:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I1014 08:47:24.940918   15224 command_runner.go:130] >   DiskPressure     Unknown   Mon, 14 Oct 2024 15:41:53 +0000   Mon, 14 Oct 2024 15:43:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I1014 08:47:24.940918   15224 command_runner.go:130] >   PIDPressure      Unknown   Mon, 14 Oct 2024 15:41:53 +0000   Mon, 14 Oct 2024 15:43:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I1014 08:47:24.940918   15224 command_runner.go:130] >   Ready            Unknown   Mon, 14 Oct 2024 15:41:53 +0000   Mon, 14 Oct 2024 15:43:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I1014 08:47:24.940918   15224 command_runner.go:130] > Addresses:
	I1014 08:47:24.940918   15224 command_runner.go:130] >   InternalIP:  172.20.102.29
	I1014 08:47:24.940918   15224 command_runner.go:130] >   Hostname:    multinode-671000-m03
	I1014 08:47:24.940918   15224 command_runner.go:130] > Capacity:
	I1014 08:47:24.940918   15224 command_runner.go:130] >   cpu:                2
	I1014 08:47:24.941933   15224 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I1014 08:47:24.941933   15224 command_runner.go:130] >   hugepages-2Mi:      0
	I1014 08:47:24.941933   15224 command_runner.go:130] >   memory:             2164264Ki
	I1014 08:47:24.941933   15224 command_runner.go:130] >   pods:               110
	I1014 08:47:24.941933   15224 command_runner.go:130] > Allocatable:
	I1014 08:47:24.941933   15224 command_runner.go:130] >   cpu:                2
	I1014 08:47:24.941933   15224 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I1014 08:47:24.941933   15224 command_runner.go:130] >   hugepages-2Mi:      0
	I1014 08:47:24.941933   15224 command_runner.go:130] >   memory:             2164264Ki
	I1014 08:47:24.941933   15224 command_runner.go:130] >   pods:               110
	I1014 08:47:24.941933   15224 command_runner.go:130] > System Info:
	I1014 08:47:24.941933   15224 command_runner.go:130] >   Machine ID:                 6da8cf5e96c04d55b9129d0893534bf2
	I1014 08:47:24.941933   15224 command_runner.go:130] >   System UUID:                49616488-815a-3f43-8f47-13dbf29b6ca7
	I1014 08:47:24.941933   15224 command_runner.go:130] >   Boot ID:                    d9fe58fb-ac8e-4430-9563-1b3e9fd35ffd
	I1014 08:47:24.941933   15224 command_runner.go:130] >   Kernel Version:             5.10.207
	I1014 08:47:24.941933   15224 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I1014 08:47:24.941933   15224 command_runner.go:130] >   Operating System:           linux
	I1014 08:47:24.941933   15224 command_runner.go:130] >   Architecture:               amd64
	I1014 08:47:24.941933   15224 command_runner.go:130] >   Container Runtime Version:  docker://27.3.1
	I1014 08:47:24.941933   15224 command_runner.go:130] >   Kubelet Version:            v1.31.1
	I1014 08:47:24.941933   15224 command_runner.go:130] >   Kube-Proxy Version:         v1.31.1
	I1014 08:47:24.941933   15224 command_runner.go:130] > PodCIDR:                      10.244.3.0/24
	I1014 08:47:24.941933   15224 command_runner.go:130] > PodCIDRs:                     10.244.3.0/24
	I1014 08:47:24.941933   15224 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I1014 08:47:24.941933   15224 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I1014 08:47:24.941933   15224 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I1014 08:47:24.941933   15224 command_runner.go:130] >   kube-system                 kindnet-5rqxq       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	I1014 08:47:24.941933   15224 command_runner.go:130] >   kube-system                 kube-proxy-n6txs    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	I1014 08:47:24.941933   15224 command_runner.go:130] > Allocated resources:
	I1014 08:47:24.941933   15224 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I1014 08:47:24.941933   15224 command_runner.go:130] >   Resource           Requests   Limits
	I1014 08:47:24.941933   15224 command_runner.go:130] >   --------           --------   ------
	I1014 08:47:24.941933   15224 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I1014 08:47:24.941933   15224 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I1014 08:47:24.941933   15224 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I1014 08:47:24.941933   15224 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I1014 08:47:24.941933   15224 command_runner.go:130] > Events:
	I1014 08:47:24.941933   15224 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I1014 08:47:24.941933   15224 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I1014 08:47:24.941933   15224 command_runner.go:130] >   Normal  Starting                 5m46s                  kube-proxy       
	I1014 08:47:24.941933   15224 command_runner.go:130] >   Normal  Starting                 16m                    kube-proxy       
	I1014 08:47:24.941933   15224 command_runner.go:130] >   Normal  NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	I1014 08:47:24.941933   15224 command_runner.go:130] >   Normal  NodeHasSufficientMemory  16m (x2 over 16m)      kubelet          Node multinode-671000-m03 status is now: NodeHasSufficientMemory
	I1014 08:47:24.941933   15224 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    16m (x2 over 16m)      kubelet          Node multinode-671000-m03 status is now: NodeHasNoDiskPressure
	I1014 08:47:24.941933   15224 command_runner.go:130] >   Normal  NodeHasSufficientPID     16m (x2 over 16m)      kubelet          Node multinode-671000-m03 status is now: NodeHasSufficientPID
	I1014 08:47:24.941933   15224 command_runner.go:130] >   Normal  NodeReady                16m                    kubelet          Node multinode-671000-m03 status is now: NodeReady
	I1014 08:47:24.941933   15224 command_runner.go:130] >   Normal  Starting                 5m50s                  kubelet          Starting kubelet.
	I1014 08:47:24.941933   15224 command_runner.go:130] >   Normal  NodeHasSufficientMemory  5m50s (x2 over 5m50s)  kubelet          Node multinode-671000-m03 status is now: NodeHasSufficientMemory
	I1014 08:47:24.941933   15224 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    5m50s (x2 over 5m50s)  kubelet          Node multinode-671000-m03 status is now: NodeHasNoDiskPressure
	I1014 08:47:24.941933   15224 command_runner.go:130] >   Normal  NodeHasSufficientPID     5m50s (x2 over 5m50s)  kubelet          Node multinode-671000-m03 status is now: NodeHasSufficientPID
	I1014 08:47:24.941933   15224 command_runner.go:130] >   Normal  NodeAllocatableEnforced  5m50s                  kubelet          Updated Node Allocatable limit across pods
	I1014 08:47:24.941933   15224 command_runner.go:130] >   Normal  RegisteredNode           5m45s                  node-controller  Node multinode-671000-m03 event: Registered Node multinode-671000-m03 in Controller
	I1014 08:47:24.941933   15224 command_runner.go:130] >   Normal  NodeReady                5m31s                  kubelet          Node multinode-671000-m03 status is now: NodeReady
	I1014 08:47:24.941933   15224 command_runner.go:130] >   Normal  NodeNotReady             3m55s                  node-controller  Node multinode-671000-m03 status is now: NodeNotReady
	I1014 08:47:24.941933   15224 command_runner.go:130] >   Normal  RegisteredNode           66s                    node-controller  Node multinode-671000-m03 event: Registered Node multinode-671000-m03 in Controller
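The three node descriptions above tell a consistent story: the control plane (multinode-671000) returned to Ready at 15:46:59 after its restart, while both workers have reported Unknown status since their kubelets stopped posting around 15:43, hence the unreachable NoSchedule/NoExecute taints on m02 and m03. The capture was produced with the in-VM kubectl; the same view is available from the host through minikube's bundled kubectl (equivalent invocations, not part of the recorded run):

  out/minikube-windows-amd64.exe -p multinode-671000 kubectl -- get nodes -o wide
  out/minikube-windows-amd64.exe -p multinode-671000 kubectl -- describe nodes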
	I1014 08:47:24.952935   15224 logs.go:123] Gathering logs for kube-scheduler [661e75bbf6b4] ...
	I1014 08:47:24.952935   15224 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 661e75bbf6b4"
	I1014 08:47:24.987868   15224 command_runner.go:130] ! I1014 15:22:34.688194       1 serving.go:386] Generated self-signed cert in-memory
	I1014 08:47:24.987962   15224 command_runner.go:130] ! W1014 15:22:36.199586       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I1014 08:47:24.988034   15224 command_runner.go:130] ! W1014 15:22:36.199661       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I1014 08:47:24.988034   15224 command_runner.go:130] ! W1014 15:22:36.199675       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	I1014 08:47:24.988034   15224 command_runner.go:130] ! W1014 15:22:36.199681       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1014 08:47:24.988120   15224 command_runner.go:130] ! I1014 15:22:36.288536       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I1014 08:47:24.988120   15224 command_runner.go:130] ! I1014 15:22:36.288649       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 08:47:24.988120   15224 command_runner.go:130] ! I1014 15:22:36.292628       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1014 08:47:24.988120   15224 command_runner.go:130] ! I1014 15:22:36.292942       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1014 08:47:24.988218   15224 command_runner.go:130] ! I1014 15:22:36.293038       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1014 08:47:24.988218   15224 command_runner.go:130] ! I1014 15:22:36.293102       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1014 08:47:24.988218   15224 command_runner.go:130] ! W1014 15:22:36.298034       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I1014 08:47:24.988344   15224 command_runner.go:130] ! E1014 15:22:36.298090       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:24.988526   15224 command_runner.go:130] ! W1014 15:22:36.298377       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I1014 08:47:24.988594   15224 command_runner.go:130] ! E1014 15:22:36.298420       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:24.988645   15224 command_runner.go:130] ! W1014 15:22:36.298587       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I1014 08:47:24.988645   15224 command_runner.go:130] ! E1014 15:22:36.298642       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:24.988738   15224 command_runner.go:130] ! W1014 15:22:36.298730       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I1014 08:47:24.988771   15224 command_runner.go:130] ! E1014 15:22:36.298855       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:24.988771   15224 command_runner.go:130] ! W1014 15:22:36.299272       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I1014 08:47:24.988771   15224 command_runner.go:130] ! E1014 15:22:36.299314       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:24.988771   15224 command_runner.go:130] ! W1014 15:22:36.299416       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I1014 08:47:24.988771   15224 command_runner.go:130] ! E1014 15:22:36.299618       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:24.988771   15224 command_runner.go:130] ! W1014 15:22:36.299693       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I1014 08:47:24.988771   15224 command_runner.go:130] ! E1014 15:22:36.299710       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:24.988771   15224 command_runner.go:130] ! W1014 15:22:36.299857       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I1014 08:47:24.988771   15224 command_runner.go:130] ! E1014 15:22:36.299920       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:24.988771   15224 command_runner.go:130] ! W1014 15:22:36.302822       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1014 08:47:24.988771   15224 command_runner.go:130] ! E1014 15:22:36.303096       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1014 08:47:24.988771   15224 command_runner.go:130] ! W1014 15:22:36.303242       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I1014 08:47:24.988771   15224 command_runner.go:130] ! E1014 15:22:36.303288       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:24.988771   15224 command_runner.go:130] ! W1014 15:22:36.303391       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I1014 08:47:24.988771   15224 command_runner.go:130] ! E1014 15:22:36.303426       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:24.989315   15224 command_runner.go:130] ! W1014 15:22:36.303605       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I1014 08:47:24.989368   15224 command_runner.go:130] ! E1014 15:22:36.303643       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:24.989423   15224 command_runner.go:130] ! W1014 15:22:36.303739       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I1014 08:47:24.989446   15224 command_runner.go:130] ! E1014 15:22:36.303771       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:24.989446   15224 command_runner.go:130] ! W1014 15:22:36.303825       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I1014 08:47:24.989446   15224 command_runner.go:130] ! E1014 15:22:36.303860       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:24.989446   15224 command_runner.go:130] ! W1014 15:22:36.304041       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I1014 08:47:24.989446   15224 command_runner.go:130] ! E1014 15:22:36.304079       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:24.989446   15224 command_runner.go:130] ! W1014 15:22:37.145637       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I1014 08:47:24.989446   15224 command_runner.go:130] ! E1014 15:22:37.146051       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:24.989446   15224 command_runner.go:130] ! W1014 15:22:37.146415       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I1014 08:47:24.989446   15224 command_runner.go:130] ! E1014 15:22:37.146705       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:24.989446   15224 command_runner.go:130] ! W1014 15:22:37.189116       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I1014 08:47:24.989446   15224 command_runner.go:130] ! E1014 15:22:37.189252       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:24.991215   15224 command_runner.go:130] ! W1014 15:22:37.205810       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I1014 08:47:24.991215   15224 command_runner.go:130] ! E1014 15:22:37.206152       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:24.991215   15224 command_runner.go:130] ! W1014 15:22:37.269786       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I1014 08:47:24.991215   15224 command_runner.go:130] ! E1014 15:22:37.269856       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:24.991215   15224 command_runner.go:130] ! W1014 15:22:37.270258       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I1014 08:47:24.991215   15224 command_runner.go:130] ! E1014 15:22:37.270286       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:24.991215   15224 command_runner.go:130] ! W1014 15:22:37.290857       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I1014 08:47:24.991215   15224 command_runner.go:130] ! E1014 15:22:37.290997       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:24.991215   15224 command_runner.go:130] ! W1014 15:22:37.304519       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1014 08:47:24.991215   15224 command_runner.go:130] ! E1014 15:22:37.305020       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1014 08:47:24.991215   15224 command_runner.go:130] ! W1014 15:22:37.328746       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I1014 08:47:24.991215   15224 command_runner.go:130] ! E1014 15:22:37.329049       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:24.991774   15224 command_runner.go:130] ! W1014 15:22:37.338059       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I1014 08:47:24.991859   15224 command_runner.go:130] ! E1014 15:22:37.338269       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:24.991859   15224 command_runner.go:130] ! W1014 15:22:37.394728       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I1014 08:47:24.991859   15224 command_runner.go:130] ! E1014 15:22:37.394803       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:24.991949   15224 command_runner.go:130] ! W1014 15:22:37.445455       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I1014 08:47:24.991949   15224 command_runner.go:130] ! E1014 15:22:37.445612       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:24.992036   15224 command_runner.go:130] ! W1014 15:22:37.490285       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I1014 08:47:24.992036   15224 command_runner.go:130] ! E1014 15:22:37.490817       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:24.992102   15224 command_runner.go:130] ! W1014 15:22:37.537304       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I1014 08:47:24.992129   15224 command_runner.go:130] ! E1014 15:22:37.537396       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:24.992162   15224 command_runner.go:130] ! W1014 15:22:37.713713       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I1014 08:47:24.992162   15224 command_runner.go:130] ! E1014 15:22:37.713777       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:24.992162   15224 command_runner.go:130] ! I1014 15:22:39.593596       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1014 08:47:24.992162   15224 command_runner.go:130] ! I1014 15:43:46.388691       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I1014 08:47:24.992162   15224 command_runner.go:130] ! I1014 15:43:46.388783       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1014 08:47:24.992162   15224 command_runner.go:130] ! I1014 15:43:46.389141       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1014 08:47:24.992162   15224 command_runner.go:130] ! E1014 15:43:46.389549       1 run.go:72] "command failed" err="finished without leader elect"
	I1014 08:47:25.003406   15224 logs.go:123] Gathering logs for kube-apiserver [a834664fc8b8] ...
	I1014 08:47:25.003406   15224 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a834664fc8b8"
	I1014 08:47:25.034566   15224 command_runner.go:130] ! I1014 15:46:12.133612       1 options.go:228] external host was not specified, using 172.20.106.123
	I1014 08:47:25.034653   15224 command_runner.go:130] ! I1014 15:46:12.139596       1 server.go:142] Version: v1.31.1
	I1014 08:47:25.034653   15224 command_runner.go:130] ! I1014 15:46:12.140322       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 08:47:25.034653   15224 command_runner.go:130] ! I1014 15:46:13.070213       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I1014 08:47:25.034734   15224 command_runner.go:130] ! I1014 15:46:13.112422       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1014 08:47:25.034734   15224 command_runner.go:130] ! I1014 15:46:13.116622       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I1014 08:47:25.034734   15224 command_runner.go:130] ! I1014 15:46:13.116890       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1014 08:47:25.034734   15224 command_runner.go:130] ! I1014 15:46:13.117611       1 instance.go:232] Using reconciler: lease
	I1014 08:47:25.034734   15224 command_runner.go:130] ! I1014 15:46:13.606403       1 handler.go:286] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I1014 08:47:25.034734   15224 command_runner.go:130] ! W1014 15:46:13.606961       1 genericapiserver.go:765] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:25.034877   15224 command_runner.go:130] ! I1014 15:46:13.910757       1 handler.go:286] Adding GroupVersion  v1 to ResourceManager
	I1014 08:47:25.034877   15224 command_runner.go:130] ! I1014 15:46:13.911096       1 apis.go:105] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I1014 08:47:25.034877   15224 command_runner.go:130] ! I1014 15:46:14.140196       1 apis.go:105] API group "storagemigration.k8s.io" is not enabled, skipping.
	I1014 08:47:25.034877   15224 command_runner.go:130] ! I1014 15:46:14.332586       1 apis.go:105] API group "resource.k8s.io" is not enabled, skipping.
	I1014 08:47:25.034982   15224 command_runner.go:130] ! I1014 15:46:14.344695       1 handler.go:286] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I1014 08:47:25.034982   15224 command_runner.go:130] ! W1014 15:46:14.344792       1 genericapiserver.go:765] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:25.034982   15224 command_runner.go:130] ! W1014 15:46:14.344802       1 genericapiserver.go:765] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I1014 08:47:25.034982   15224 command_runner.go:130] ! I1014 15:46:14.345547       1 handler.go:286] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I1014 08:47:25.034982   15224 command_runner.go:130] ! W1014 15:46:14.345645       1 genericapiserver.go:765] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:25.034982   15224 command_runner.go:130] ! I1014 15:46:14.346729       1 handler.go:286] Adding GroupVersion autoscaling v2 to ResourceManager
	I1014 08:47:25.035096   15224 command_runner.go:130] ! I1014 15:46:14.348142       1 handler.go:286] Adding GroupVersion autoscaling v1 to ResourceManager
	I1014 08:47:25.035096   15224 command_runner.go:130] ! W1014 15:46:14.348261       1 genericapiserver.go:765] Skipping API autoscaling/v2beta1 because it has no resources.
	I1014 08:47:25.035159   15224 command_runner.go:130] ! W1014 15:46:14.348272       1 genericapiserver.go:765] Skipping API autoscaling/v2beta2 because it has no resources.
	I1014 08:47:25.035186   15224 command_runner.go:130] ! I1014 15:46:14.350632       1 handler.go:286] Adding GroupVersion batch v1 to ResourceManager
	I1014 08:47:25.035186   15224 command_runner.go:130] ! W1014 15:46:14.350741       1 genericapiserver.go:765] Skipping API batch/v1beta1 because it has no resources.
	I1014 08:47:25.035236   15224 command_runner.go:130] ! I1014 15:46:14.352378       1 handler.go:286] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I1014 08:47:25.035236   15224 command_runner.go:130] ! W1014 15:46:14.352489       1 genericapiserver.go:765] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:25.035236   15224 command_runner.go:130] ! W1014 15:46:14.352501       1 genericapiserver.go:765] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I1014 08:47:25.035236   15224 command_runner.go:130] ! I1014 15:46:14.353674       1 handler.go:286] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I1014 08:47:25.035236   15224 command_runner.go:130] ! W1014 15:46:14.353813       1 genericapiserver.go:765] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:25.035236   15224 command_runner.go:130] ! W1014 15:46:14.353843       1 genericapiserver.go:765] Skipping API coordination.k8s.io/v1alpha1 because it has no resources.
	I1014 08:47:25.035342   15224 command_runner.go:130] ! I1014 15:46:14.355117       1 handler.go:286] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I1014 08:47:25.035372   15224 command_runner.go:130] ! W1014 15:46:14.355256       1 genericapiserver.go:765] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:25.035372   15224 command_runner.go:130] ! I1014 15:46:14.358401       1 handler.go:286] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I1014 08:47:25.035372   15224 command_runner.go:130] ! W1014 15:46:14.358517       1 genericapiserver.go:765] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:25.035372   15224 command_runner.go:130] ! W1014 15:46:14.358528       1 genericapiserver.go:765] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I1014 08:47:25.035372   15224 command_runner.go:130] ! I1014 15:46:14.359534       1 handler.go:286] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I1014 08:47:25.035372   15224 command_runner.go:130] ! W1014 15:46:14.359632       1 genericapiserver.go:765] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:25.035447   15224 command_runner.go:130] ! W1014 15:46:14.359643       1 genericapiserver.go:765] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I1014 08:47:25.035447   15224 command_runner.go:130] ! I1014 15:46:14.360836       1 handler.go:286] Adding GroupVersion policy v1 to ResourceManager
	I1014 08:47:25.035474   15224 command_runner.go:130] ! W1014 15:46:14.360942       1 genericapiserver.go:765] Skipping API policy/v1beta1 because it has no resources.
	I1014 08:47:25.035474   15224 command_runner.go:130] ! I1014 15:46:14.363702       1 handler.go:286] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I1014 08:47:25.035474   15224 command_runner.go:130] ! W1014 15:46:14.363848       1 genericapiserver.go:765] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:25.035552   15224 command_runner.go:130] ! W1014 15:46:14.363860       1 genericapiserver.go:765] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I1014 08:47:25.035552   15224 command_runner.go:130] ! I1014 15:46:14.364685       1 handler.go:286] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I1014 08:47:25.035580   15224 command_runner.go:130] ! W1014 15:46:14.364801       1 genericapiserver.go:765] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:25.035611   15224 command_runner.go:130] ! W1014 15:46:14.364812       1 genericapiserver.go:765] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I1014 08:47:25.035611   15224 command_runner.go:130] ! I1014 15:46:14.368101       1 handler.go:286] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I1014 08:47:25.035651   15224 command_runner.go:130] ! W1014 15:46:14.368216       1 genericapiserver.go:765] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:25.035651   15224 command_runner.go:130] ! W1014 15:46:14.368228       1 genericapiserver.go:765] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I1014 08:47:25.035693   15224 command_runner.go:130] ! I1014 15:46:14.370008       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1 to ResourceManager
	I1014 08:47:25.035693   15224 command_runner.go:130] ! I1014 15:46:14.371702       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
	I1014 08:47:25.035733   15224 command_runner.go:130] ! W1014 15:46:14.371808       1 genericapiserver.go:765] Skipping API flowcontrol.apiserver.k8s.io/v1beta2 because it has no resources.
	I1014 08:47:25.035733   15224 command_runner.go:130] ! W1014 15:46:14.371818       1 genericapiserver.go:765] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:25.035733   15224 command_runner.go:130] ! I1014 15:46:14.376771       1 handler.go:286] Adding GroupVersion apps v1 to ResourceManager
	I1014 08:47:25.035733   15224 command_runner.go:130] ! W1014 15:46:14.376868       1 genericapiserver.go:765] Skipping API apps/v1beta2 because it has no resources.
	I1014 08:47:25.035818   15224 command_runner.go:130] ! W1014 15:46:14.376877       1 genericapiserver.go:765] Skipping API apps/v1beta1 because it has no resources.
	I1014 08:47:25.035883   15224 command_runner.go:130] ! I1014 15:46:14.379998       1 handler.go:286] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I1014 08:47:25.035883   15224 command_runner.go:130] ! W1014 15:46:14.380101       1 genericapiserver.go:765] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:25.035915   15224 command_runner.go:130] ! W1014 15:46:14.380112       1 genericapiserver.go:765] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I1014 08:47:25.035915   15224 command_runner.go:130] ! I1014 15:46:14.380956       1 handler.go:286] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I1014 08:47:25.035915   15224 command_runner.go:130] ! W1014 15:46:14.381059       1 genericapiserver.go:765] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:25.035915   15224 command_runner.go:130] ! I1014 15:46:14.395072       1 handler.go:286] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I1014 08:47:25.035915   15224 command_runner.go:130] ! W1014 15:46:14.395116       1 genericapiserver.go:765] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:25.035915   15224 command_runner.go:130] ! I1014 15:46:15.014537       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1014 08:47:25.035915   15224 command_runner.go:130] ! I1014 15:46:15.014702       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1014 08:47:25.035915   15224 command_runner.go:130] ! I1014 15:46:15.016123       1 dynamic_serving_content.go:135] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I1014 08:47:25.035915   15224 command_runner.go:130] ! I1014 15:46:15.016823       1 secure_serving.go:213] Serving securely on [::]:8443
	I1014 08:47:25.035915   15224 command_runner.go:130] ! I1014 15:46:15.017426       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1014 08:47:25.035915   15224 command_runner.go:130] ! I1014 15:46:15.018450       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I1014 08:47:25.035915   15224 command_runner.go:130] ! I1014 15:46:15.018766       1 remote_available_controller.go:411] Starting RemoteAvailability controller
	I1014 08:47:25.035915   15224 command_runner.go:130] ! I1014 15:46:15.018850       1 cache.go:32] Waiting for caches to sync for RemoteAvailability controller
	I1014 08:47:25.035915   15224 command_runner.go:130] ! I1014 15:46:15.018985       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I1014 08:47:25.035915   15224 command_runner.go:130] ! I1014 15:46:15.021391       1 controller.go:119] Starting legacy_token_tracking_controller
	I1014 08:47:25.035915   15224 command_runner.go:130] ! I1014 15:46:15.021471       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I1014 08:47:25.035915   15224 command_runner.go:130] ! I1014 15:46:15.021517       1 aggregator.go:169] waiting for initial CRD sync...
	I1014 08:47:25.035915   15224 command_runner.go:130] ! I1014 15:46:15.022050       1 dynamic_serving_content.go:135] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I1014 08:47:25.035915   15224 command_runner.go:130] ! I1014 15:46:15.022573       1 cluster_authentication_trust_controller.go:443] Starting cluster_authentication_trust_controller controller
	I1014 08:47:25.035915   15224 command_runner.go:130] ! I1014 15:46:15.022688       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
	I1014 08:47:25.035915   15224 command_runner.go:130] ! I1014 15:46:15.022775       1 system_namespaces_controller.go:66] Starting system namespaces controller
	I1014 08:47:25.035915   15224 command_runner.go:130] ! I1014 15:46:15.026778       1 apf_controller.go:377] Starting API Priority and Fairness config controller
	I1014 08:47:25.035915   15224 command_runner.go:130] ! I1014 15:46:15.027043       1 controller.go:78] Starting OpenAPI AggregationController
	I1014 08:47:25.035915   15224 command_runner.go:130] ! I1014 15:46:15.027942       1 customresource_discovery_controller.go:292] Starting DiscoveryController
	I1014 08:47:25.035915   15224 command_runner.go:130] ! I1014 15:46:15.029402       1 local_available_controller.go:156] Starting LocalAvailability controller
	I1014 08:47:25.035915   15224 command_runner.go:130] ! I1014 15:46:15.029447       1 cache.go:32] Waiting for caches to sync for LocalAvailability controller
	I1014 08:47:25.035915   15224 command_runner.go:130] ! I1014 15:46:15.029815       1 apiservice_controller.go:100] Starting APIServiceRegistrationController
	I1014 08:47:25.035915   15224 command_runner.go:130] ! I1014 15:46:15.029850       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I1014 08:47:25.035915   15224 command_runner.go:130] ! I1014 15:46:15.034040       1 crdregistration_controller.go:114] Starting crd-autoregister controller
	I1014 08:47:25.035915   15224 command_runner.go:130] ! I1014 15:46:15.034136       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I1014 08:47:25.035915   15224 command_runner.go:130] ! I1014 15:46:15.034690       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1014 08:47:25.036468   15224 command_runner.go:130] ! I1014 15:46:15.034946       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1014 08:47:25.036468   15224 command_runner.go:130] ! I1014 15:46:15.082229       1 controller.go:142] Starting OpenAPI controller
	I1014 08:47:25.036468   15224 command_runner.go:130] ! I1014 15:46:15.083838       1 controller.go:90] Starting OpenAPI V3 controller
	I1014 08:47:25.036550   15224 command_runner.go:130] ! I1014 15:46:15.083894       1 naming_controller.go:294] Starting NamingConditionController
	I1014 08:47:25.036550   15224 command_runner.go:130] ! I1014 15:46:15.086443       1 establishing_controller.go:81] Starting EstablishingController
	I1014 08:47:25.036550   15224 command_runner.go:130] ! I1014 15:46:15.087455       1 nonstructuralschema_controller.go:195] Starting NonStructuralSchemaConditionController
	I1014 08:47:25.036550   15224 command_runner.go:130] ! I1014 15:46:15.088333       1 apiapproval_controller.go:189] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I1014 08:47:25.036647   15224 command_runner.go:130] ! I1014 15:46:15.092677       1 crd_finalizer.go:269] Starting CRDFinalizer
	I1014 08:47:25.036647   15224 command_runner.go:130] ! I1014 15:46:15.212597       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1014 08:47:25.036647   15224 command_runner.go:130] ! I1014 15:46:15.212691       1 policy_source.go:224] refreshing policies
	I1014 08:47:25.036718   15224 command_runner.go:130] ! I1014 15:46:15.221529       1 shared_informer.go:320] Caches are synced for configmaps
	I1014 08:47:25.036745   15224 command_runner.go:130] ! I1014 15:46:15.226910       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1014 08:47:25.036745   15224 command_runner.go:130] ! I1014 15:46:15.227013       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1014 08:47:25.036778   15224 command_runner.go:130] ! I1014 15:46:15.229937       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1014 08:47:25.036778   15224 command_runner.go:130] ! I1014 15:46:15.231898       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1014 08:47:25.036778   15224 command_runner.go:130] ! I1014 15:46:15.233234       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1014 08:47:25.036778   15224 command_runner.go:130] ! I1014 15:46:15.234375       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1014 08:47:25.036778   15224 command_runner.go:130] ! I1014 15:46:15.235151       1 aggregator.go:171] initial CRD sync complete...
	I1014 08:47:25.036778   15224 command_runner.go:130] ! I1014 15:46:15.235400       1 autoregister_controller.go:144] Starting autoregister controller
	I1014 08:47:25.036778   15224 command_runner.go:130] ! I1014 15:46:15.235712       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1014 08:47:25.036778   15224 command_runner.go:130] ! I1014 15:46:15.235936       1 cache.go:39] Caches are synced for autoregister controller
	I1014 08:47:25.036778   15224 command_runner.go:130] ! I1014 15:46:15.255261       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1014 08:47:25.036778   15224 command_runner.go:130] ! I1014 15:46:15.256039       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I1014 08:47:25.036778   15224 command_runner.go:130] ! I1014 15:46:15.271561       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1014 08:47:25.036778   15224 command_runner.go:130] ! I1014 15:46:15.319091       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1014 08:47:25.036778   15224 command_runner.go:130] ! I1014 15:46:16.036564       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1014 08:47:25.036778   15224 command_runner.go:130] ! W1014 15:46:16.558489       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.20.100.167 172.20.106.123]
	I1014 08:47:25.036778   15224 command_runner.go:130] ! I1014 15:46:16.560272       1 controller.go:615] quota admission added evaluator for: endpoints
	I1014 08:47:25.036778   15224 command_runner.go:130] ! I1014 15:46:16.573015       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1014 08:47:25.036778   15224 command_runner.go:130] ! I1014 15:46:18.229365       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1014 08:47:25.036778   15224 command_runner.go:130] ! I1014 15:46:18.748102       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1014 08:47:25.036778   15224 command_runner.go:130] ! I1014 15:46:18.793266       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1014 08:47:25.036778   15224 command_runner.go:130] ! I1014 15:46:18.985788       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1014 08:47:25.036778   15224 command_runner.go:130] ! I1014 15:46:19.024530       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1014 08:47:25.036778   15224 command_runner.go:130] ! W1014 15:46:36.563040       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.20.106.123]
	I1014 08:47:25.044949   15224 logs.go:123] Gathering logs for kube-scheduler [d428685276e1] ...
	I1014 08:47:25.044949   15224 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d428685276e1"
	I1014 08:47:25.077179   15224 command_runner.go:130] ! I1014 15:46:12.515594       1 serving.go:386] Generated self-signed cert in-memory
	I1014 08:47:25.077179   15224 command_runner.go:130] ! W1014 15:46:15.152686       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I1014 08:47:25.077179   15224 command_runner.go:130] ! W1014 15:46:15.152818       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I1014 08:47:25.077398   15224 command_runner.go:130] ! W1014 15:46:15.152851       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	I1014 08:47:25.077398   15224 command_runner.go:130] ! W1014 15:46:15.153007       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1014 08:47:25.077398   15224 command_runner.go:130] ! I1014 15:46:15.250163       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I1014 08:47:25.077398   15224 command_runner.go:130] ! I1014 15:46:15.250420       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 08:47:25.077480   15224 command_runner.go:130] ! I1014 15:46:15.258344       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1014 08:47:25.077480   15224 command_runner.go:130] ! I1014 15:46:15.258735       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1014 08:47:25.077480   15224 command_runner.go:130] ! I1014 15:46:15.263966       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1014 08:47:25.077480   15224 command_runner.go:130] ! I1014 15:46:15.258753       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1014 08:47:25.077480   15224 command_runner.go:130] ! I1014 15:46:15.365145       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1014 08:47:27.597723   15224 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 08:47:27.628702   15224 command_runner.go:130] > 1906
	I1014 08:47:27.628702   15224 api_server.go:72] duration metric: took 1m6.952574s to wait for apiserver process to appear ...
	I1014 08:47:27.628927   15224 api_server.go:88] waiting for apiserver healthz status ...
	I1014 08:47:27.641529   15224 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 08:47:27.666077   15224 command_runner.go:130] > a834664fc8b8
	I1014 08:47:27.666944   15224 logs.go:282] 1 containers: [a834664fc8b8]
	I1014 08:47:27.676549   15224 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 08:47:27.700651   15224 command_runner.go:130] > 48c8492e231e
	I1014 08:47:27.700744   15224 logs.go:282] 1 containers: [48c8492e231e]
	I1014 08:47:27.711500   15224 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 08:47:27.735082   15224 command_runner.go:130] > 5d223e2e64fc
	I1014 08:47:27.735497   15224 command_runner.go:130] > d9831e9f8ce8
	I1014 08:47:27.735578   15224 logs.go:282] 2 containers: [5d223e2e64fc d9831e9f8ce8]
	I1014 08:47:27.745090   15224 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 08:47:27.771662   15224 command_runner.go:130] > d428685276e1
	I1014 08:47:27.771662   15224 command_runner.go:130] > 661e75bbf6b4
	I1014 08:47:27.774581   15224 logs.go:282] 2 containers: [d428685276e1 661e75bbf6b4]
	I1014 08:47:27.783397   15224 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 08:47:27.807518   15224 command_runner.go:130] > e83db276dec3
	I1014 08:47:27.807518   15224 command_runner.go:130] > ea19428d7036
	I1014 08:47:27.807518   15224 logs.go:282] 2 containers: [e83db276dec3 ea19428d7036]
	I1014 08:47:27.815865   15224 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 08:47:27.841755   15224 command_runner.go:130] > 8af48c446f7e
	I1014 08:47:27.841755   15224 command_runner.go:130] > 712aad669c9f
	I1014 08:47:27.841864   15224 logs.go:282] 2 containers: [8af48c446f7e 712aad669c9f]
	I1014 08:47:27.851510   15224 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 08:47:27.877955   15224 command_runner.go:130] > bba035362eb9
	I1014 08:47:27.877955   15224 command_runner.go:130] > fcdf89a3ac8c
	I1014 08:47:27.878041   15224 logs.go:282] 2 containers: [bba035362eb9 fcdf89a3ac8c]
	I1014 08:47:27.878107   15224 logs.go:123] Gathering logs for etcd [48c8492e231e] ...
	I1014 08:47:27.878107   15224 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48c8492e231e"
	I1014 08:47:27.908030   15224 command_runner.go:130] ! {"level":"warn","ts":"2024-10-14T15:46:11.845953Z","caller":"embed/config.go:687","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I1014 08:47:27.908369   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:11.848739Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.20.106.123:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.20.106.123:2380","--initial-cluster=multinode-671000=https://172.20.106.123:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.20.106.123:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.20.106.123:2380","--name=multinode-671000","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I1014 08:47:27.908369   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:11.848857Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I1014 08:47:27.908556   15224 command_runner.go:130] ! {"level":"warn","ts":"2024-10-14T15:46:11.848886Z","caller":"embed/config.go:687","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I1014 08:47:27.908556   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:11.848900Z","caller":"embed/etcd.go:128","msg":"configuring peer listeners","listen-peer-urls":["https://172.20.106.123:2380"]}
	I1014 08:47:27.908556   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:11.848962Z","caller":"embed/etcd.go:496","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I1014 08:47:27.908644   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:11.854418Z","caller":"embed/etcd.go:136","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.20.106.123:2379"]}
	I1014 08:47:27.908698   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:11.857036Z","caller":"embed/etcd.go:310","msg":"starting an etcd server","etcd-version":"3.5.15","git-sha":"9a5533382","go-version":"go1.21.12","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-671000","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.20.106.123:2380"],"listen-peer-urls":["https://172.20.106.123:2380"],"advertise-client-urls":["https://172.20.106.123:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.20.106.123:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I1014 08:47:27.908799   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:11.899392Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"40.66952ms"}
	I1014 08:47:27.908799   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:11.949173Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	I1014 08:47:27.908895   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:11.984197Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"2dcbff584edb18cc","local-member-id":"782c48cbdf98397b","commit-index":2088}
	I1014 08:47:27.908895   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:11.985089Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"782c48cbdf98397b switched to configuration voters=()"}
	I1014 08:47:27.908895   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:11.987739Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"782c48cbdf98397b became follower at term 2"}
	I1014 08:47:27.908989   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:11.987772Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 782c48cbdf98397b [peers: [], term: 2, commit: 2088, applied: 0, lastindex: 2088, lastterm: 2]"}
	I1014 08:47:27.908989   15224 command_runner.go:130] ! {"level":"warn","ts":"2024-10-14T15:46:12.003567Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	I1014 08:47:27.909050   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.010981Z","caller":"mvcc/kvstore.go:341","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1396}
	I1014 08:47:27.909050   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.025362Z","caller":"mvcc/kvstore.go:418","msg":"kvstore restored","current-rev":1813}
	I1014 08:47:27.909116   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.035174Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I1014 08:47:27.909143   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.045608Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"782c48cbdf98397b","timeout":"7s"}
	I1014 08:47:27.909143   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.046705Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"782c48cbdf98397b"}
	I1014 08:47:27.909222   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.046807Z","caller":"etcdserver/server.go:867","msg":"starting etcd server","local-member-id":"782c48cbdf98397b","local-server-version":"3.5.15","cluster-version":"to_be_decided"}
	I1014 08:47:27.909222   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.047198Z","caller":"etcdserver/server.go:767","msg":"starting initial election tick advance","election-ticks":10}
	I1014 08:47:27.909322   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.047977Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I1014 08:47:27.909322   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.048044Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I1014 08:47:27.909322   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.048058Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I1014 08:47:27.909386   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.048736Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"782c48cbdf98397b switched to configuration voters=(8659376223993477499)"}
	I1014 08:47:27.909445   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.049262Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"2dcbff584edb18cc","local-member-id":"782c48cbdf98397b","added-peer-id":"782c48cbdf98397b","added-peer-peer-urls":["https://172.20.100.167:2380"]}
	I1014 08:47:27.909469   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.049694Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"2dcbff584edb18cc","local-member-id":"782c48cbdf98397b","cluster-version":"3.5"}
	I1014 08:47:27.909497   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.049815Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I1014 08:47:27.909497   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.056204Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	I1014 08:47:27.909497   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.062166Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I1014 08:47:27.909497   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.062574Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"782c48cbdf98397b","initial-advertise-peer-urls":["https://172.20.106.123:2380"],"listen-peer-urls":["https://172.20.106.123:2380"],"advertise-client-urls":["https://172.20.106.123:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.20.106.123:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I1014 08:47:27.909497   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.062654Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I1014 08:47:27.909497   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.062749Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"172.20.106.123:2380"}
	I1014 08:47:27.909497   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.062764Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"172.20.106.123:2380"}
	I1014 08:47:27.909497   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.489231Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"782c48cbdf98397b is starting a new election at term 2"}
	I1014 08:47:27.909497   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.489328Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"782c48cbdf98397b became pre-candidate at term 2"}
	I1014 08:47:27.909497   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.489358Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"782c48cbdf98397b received MsgPreVoteResp from 782c48cbdf98397b at term 2"}
	I1014 08:47:27.909497   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.489374Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"782c48cbdf98397b became candidate at term 3"}
	I1014 08:47:27.909497   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.489385Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"782c48cbdf98397b received MsgVoteResp from 782c48cbdf98397b at term 3"}
	I1014 08:47:27.909497   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.489395Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"782c48cbdf98397b became leader at term 3"}
	I1014 08:47:27.909497   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.489487Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 782c48cbdf98397b elected leader 782c48cbdf98397b at term 3"}
	I1014 08:47:27.909497   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.496949Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I1014 08:47:27.909497   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.496902Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"782c48cbdf98397b","local-member-attributes":"{Name:multinode-671000 ClientURLs:[https://172.20.106.123:2379]}","request-path":"/0/members/782c48cbdf98397b/attributes","cluster-id":"2dcbff584edb18cc","publish-timeout":"7s"}
	I1014 08:47:27.909497   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.497822Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I1014 08:47:27.909497   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.499586Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I1014 08:47:27.909497   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.499631Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I1014 08:47:27.909497   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.500815Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	I1014 08:47:27.909497   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.502392Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.20.106.123:2379"}
	I1014 08:47:27.909497   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.503879Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	I1014 08:47:27.909497   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.505686Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I1014 08:47:27.922164   15224 logs.go:123] Gathering logs for coredns [d9831e9f8ce8] ...
	I1014 08:47:27.922164   15224 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9831e9f8ce8"
	I1014 08:47:27.954323   15224 command_runner.go:130] > .:53
	I1014 08:47:27.954323   15224 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 6020d6b23d8ab86e45d1d2aab12a43bd19ffd1800c3e6fd0a66779be525e59cd5618fbe141b55ee3a43e920652bc72e7efafa696e55e63b2f614e5fc80dabe10
	I1014 08:47:27.954323   15224 command_runner.go:130] > CoreDNS-1.11.3
	I1014 08:47:27.954323   15224 command_runner.go:130] > linux/amd64, go1.21.11, a6338e9
	I1014 08:47:27.954323   15224 command_runner.go:130] > [INFO] 127.0.0.1:35483 - 39257 "HINFO IN 8382239991273371198.8905610076788717940. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.074337261s
	I1014 08:47:27.954323   15224 command_runner.go:130] > [INFO] 10.244.1.2:36950 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0003062s
	I1014 08:47:27.954323   15224 command_runner.go:130] > [INFO] 10.244.1.2:49277 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.118118924s
	I1014 08:47:27.954323   15224 command_runner.go:130] > [INFO] 10.244.1.2:33122 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.153089702s
	I1014 08:47:27.954323   15224 command_runner.go:130] > [INFO] 10.244.1.2:44549 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.188160849s
	I1014 08:47:27.954323   15224 command_runner.go:130] > [INFO] 10.244.0.3:43390 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000191s
	I1014 08:47:27.954323   15224 command_runner.go:130] > [INFO] 10.244.0.3:59817 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000279499s
	I1014 08:47:27.954323   15224 command_runner.go:130] > [INFO] 10.244.0.3:34294 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.0002004s
	I1014 08:47:27.954323   15224 command_runner.go:130] > [INFO] 10.244.0.3:56220 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.0002257s
	I1014 08:47:27.954323   15224 command_runner.go:130] > [INFO] 10.244.1.2:44291 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002098s
	I1014 08:47:27.954323   15224 command_runner.go:130] > [INFO] 10.244.1.2:42361 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.17965629s
	I1014 08:47:27.954323   15224 command_runner.go:130] > [INFO] 10.244.1.2:48756 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0002923s
	I1014 08:47:27.954323   15224 command_runner.go:130] > [INFO] 10.244.1.2:53437 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000274799s
	I1014 08:47:27.954323   15224 command_runner.go:130] > [INFO] 10.244.1.2:60026 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.013560692s
	I1014 08:47:27.954323   15224 command_runner.go:130] > [INFO] 10.244.1.2:39241 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001752s
	I1014 08:47:27.954323   15224 command_runner.go:130] > [INFO] 10.244.1.2:36696 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0003084s
	I1014 08:47:27.954323   15224 command_runner.go:130] > [INFO] 10.244.1.2:51603 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001109s
	I1014 08:47:27.954323   15224 command_runner.go:130] > [INFO] 10.244.0.3:37516 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002057s
	I1014 08:47:27.954323   15224 command_runner.go:130] > [INFO] 10.244.0.3:41090 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0001329s
	I1014 08:47:27.954323   15224 command_runner.go:130] > [INFO] 10.244.0.3:45330 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001397s
	I1014 08:47:27.954323   15224 command_runner.go:130] > [INFO] 10.244.0.3:46520 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000154699s
	I1014 08:47:27.954323   15224 command_runner.go:130] > [INFO] 10.244.0.3:40342 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0001347s
	I1014 08:47:27.954323   15224 command_runner.go:130] > [INFO] 10.244.0.3:60385 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001518s
	I1014 08:47:27.954323   15224 command_runner.go:130] > [INFO] 10.244.0.3:56338 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001527s
	I1014 08:47:27.954323   15224 command_runner.go:130] > [INFO] 10.244.0.3:50480 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0002505s
	I1014 08:47:27.954323   15224 command_runner.go:130] > [INFO] 10.244.1.2:60726 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002297s
	I1014 08:47:27.954323   15224 command_runner.go:130] > [INFO] 10.244.1.2:44347 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0002249s
	I1014 08:47:27.954900   15224 command_runner.go:130] > [INFO] 10.244.1.2:49092 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001075s
	I1014 08:47:27.954900   15224 command_runner.go:130] > [INFO] 10.244.1.2:54937 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000068s
	I1014 08:47:27.954972   15224 command_runner.go:130] > [INFO] 10.244.0.3:59749 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002552s
	I1014 08:47:27.954972   15224 command_runner.go:130] > [INFO] 10.244.0.3:56008 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0002499s
	I1014 08:47:27.954972   15224 command_runner.go:130] > [INFO] 10.244.0.3:60338 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001065s
	I1014 08:47:27.954972   15224 command_runner.go:130] > [INFO] 10.244.0.3:49712 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001634s
	I1014 08:47:27.954972   15224 command_runner.go:130] > [INFO] 10.244.1.2:56508 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001577s
	I1014 08:47:27.955083   15224 command_runner.go:130] > [INFO] 10.244.1.2:45387 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001545s
	I1014 08:47:27.955083   15224 command_runner.go:130] > [INFO] 10.244.1.2:39608 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0001419s
	I1014 08:47:27.955083   15224 command_runner.go:130] > [INFO] 10.244.1.2:53878 - 5 "PTR IN 1.96.20.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.0001923s
	I1014 08:47:27.955083   15224 command_runner.go:130] > [INFO] 10.244.0.3:58589 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002828s
	I1014 08:47:27.955166   15224 command_runner.go:130] > [INFO] 10.244.0.3:58608 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001609s
	I1014 08:47:27.955166   15224 command_runner.go:130] > [INFO] 10.244.0.3:52599 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000633s
	I1014 08:47:27.955166   15224 command_runner.go:130] > [INFO] 10.244.0.3:58233 - 5 "PTR IN 1.96.20.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.0001058s
	I1014 08:47:27.955166   15224 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I1014 08:47:27.955229   15224 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
	I1014 08:47:27.957882   15224 logs.go:123] Gathering logs for kube-scheduler [d428685276e1] ...
	I1014 08:47:27.957882   15224 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d428685276e1"
	I1014 08:47:27.985124   15224 command_runner.go:130] ! I1014 15:46:12.515594       1 serving.go:386] Generated self-signed cert in-memory
	I1014 08:47:27.985617   15224 command_runner.go:130] ! W1014 15:46:15.152686       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I1014 08:47:27.985715   15224 command_runner.go:130] ! W1014 15:46:15.152818       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I1014 08:47:27.985892   15224 command_runner.go:130] ! W1014 15:46:15.152851       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	I1014 08:47:27.986047   15224 command_runner.go:130] ! W1014 15:46:15.153007       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1014 08:47:27.986047   15224 command_runner.go:130] ! I1014 15:46:15.250163       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I1014 08:47:27.986047   15224 command_runner.go:130] ! I1014 15:46:15.250420       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 08:47:27.986144   15224 command_runner.go:130] ! I1014 15:46:15.258344       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1014 08:47:27.986190   15224 command_runner.go:130] ! I1014 15:46:15.258735       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1014 08:47:27.986190   15224 command_runner.go:130] ! I1014 15:46:15.263966       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1014 08:47:27.986190   15224 command_runner.go:130] ! I1014 15:46:15.258753       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1014 08:47:27.986250   15224 command_runner.go:130] ! I1014 15:46:15.365145       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1014 08:47:27.989677   15224 logs.go:123] Gathering logs for kube-proxy [ea19428d7036] ...
	I1014 08:47:27.989741   15224 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea19428d7036"
	I1014 08:47:28.014957   15224 command_runner.go:130] ! I1014 15:22:47.466748       1 server_linux.go:66] "Using iptables proxy"
	I1014 08:47:28.015020   15224 command_runner.go:130] ! E1014 15:22:47.511018       1 proxier.go:734] "Error cleaning up nftables rules" err=<
	I1014 08:47:28.015020   15224 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-24: Error: Could not process rule: Operation not supported
	I1014 08:47:28.015020   15224 command_runner.go:130] ! 	add table ip kube-proxy
	I1014 08:47:28.015097   15224 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^
	I1014 08:47:28.015097   15224 command_runner.go:130] !  >
	I1014 08:47:28.015097   15224 command_runner.go:130] ! E1014 15:22:47.546236       1 proxier.go:734] "Error cleaning up nftables rules" err=<
	I1014 08:47:28.015097   15224 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
	I1014 08:47:28.015097   15224 command_runner.go:130] ! 	add table ip6 kube-proxy
	I1014 08:47:28.015097   15224 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^^
	I1014 08:47:28.015172   15224 command_runner.go:130] !  >
	I1014 08:47:28.015172   15224 command_runner.go:130] ! I1014 15:22:47.606437       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["172.20.100.167"]
	I1014 08:47:28.015201   15224 command_runner.go:130] ! E1014 15:22:47.606505       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1014 08:47:28.015233   15224 command_runner.go:130] ! I1014 15:22:47.788755       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1014 08:47:28.015233   15224 command_runner.go:130] ! I1014 15:22:47.788820       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1014 08:47:28.015233   15224 command_runner.go:130] ! I1014 15:22:47.788852       1 server_linux.go:169] "Using iptables Proxier"
	I1014 08:47:28.015233   15224 command_runner.go:130] ! I1014 15:22:47.807050       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1014 08:47:28.015233   15224 command_runner.go:130] ! I1014 15:22:47.807424       1 server.go:483] "Version info" version="v1.31.1"
	I1014 08:47:28.015233   15224 command_runner.go:130] ! I1014 15:22:47.807440       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 08:47:28.015233   15224 command_runner.go:130] ! I1014 15:22:47.810361       1 config.go:199] "Starting service config controller"
	I1014 08:47:28.015233   15224 command_runner.go:130] ! I1014 15:22:47.810416       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1014 08:47:28.015233   15224 command_runner.go:130] ! I1014 15:22:47.810600       1 config.go:105] "Starting endpoint slice config controller"
	I1014 08:47:28.015233   15224 command_runner.go:130] ! I1014 15:22:47.810629       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1014 08:47:28.015233   15224 command_runner.go:130] ! I1014 15:22:47.811297       1 config.go:328] "Starting node config controller"
	I1014 08:47:28.015233   15224 command_runner.go:130] ! I1014 15:22:47.811309       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1014 08:47:28.015233   15224 command_runner.go:130] ! I1014 15:22:47.911443       1 shared_informer.go:320] Caches are synced for node config
	I1014 08:47:28.015233   15224 command_runner.go:130] ! I1014 15:22:47.911791       1 shared_informer.go:320] Caches are synced for service config
	I1014 08:47:28.015233   15224 command_runner.go:130] ! I1014 15:22:47.911866       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1014 08:47:28.018533   15224 logs.go:123] Gathering logs for kube-controller-manager [8af48c446f7e] ...
	I1014 08:47:28.018533   15224 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af48c446f7e"
	I1014 08:47:28.046117   15224 command_runner.go:130] ! I1014 15:46:12.989235       1 serving.go:386] Generated self-signed cert in-memory
	I1014 08:47:28.046922   15224 command_runner.go:130] ! I1014 15:46:13.820617       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I1014 08:47:28.046922   15224 command_runner.go:130] ! I1014 15:46:13.820897       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 08:47:28.046922   15224 command_runner.go:130] ! I1014 15:46:13.823101       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1014 08:47:28.046922   15224 command_runner.go:130] ! I1014 15:46:13.823494       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1014 08:47:28.047120   15224 command_runner.go:130] ! I1014 15:46:13.824132       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I1014 08:47:28.047295   15224 command_runner.go:130] ! I1014 15:46:13.824214       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1014 08:47:28.047444   15224 command_runner.go:130] ! I1014 15:46:17.208145       1 controllermanager.go:797] "Started controller" controller="serviceaccount-token-controller"
	I1014 08:47:28.047444   15224 command_runner.go:130] ! I1014 15:46:17.211496       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I1014 08:47:28.050136   15224 command_runner.go:130] ! I1014 15:46:17.268813       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I1014 08:47:28.051196   15224 command_runner.go:130] ! I1014 15:46:17.269727       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I1014 08:47:28.051196   15224 command_runner.go:130] ! I1014 15:46:17.270864       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I1014 08:47:28.051196   15224 command_runner.go:130] ! I1014 15:46:17.271094       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I1014 08:47:28.051315   15224 command_runner.go:130] ! I1014 15:46:17.271857       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I1014 08:47:28.051315   15224 command_runner.go:130] ! I1014 15:46:17.271962       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I1014 08:47:28.051348   15224 command_runner.go:130] ! I1014 15:46:17.272049       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I1014 08:47:28.051348   15224 command_runner.go:130] ! I1014 15:46:17.272075       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I1014 08:47:28.051348   15224 command_runner.go:130] ! I1014 15:46:17.273540       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I1014 08:47:28.051348   15224 command_runner.go:130] ! I1014 15:46:17.274245       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I1014 08:47:28.051448   15224 command_runner.go:130] ! I1014 15:46:17.274579       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I1014 08:47:28.051448   15224 command_runner.go:130] ! I1014 15:46:17.274747       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I1014 08:47:28.051520   15224 command_runner.go:130] ! I1014 15:46:17.274772       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I1014 08:47:28.051548   15224 command_runner.go:130] ! I1014 15:46:17.275348       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I1014 08:47:28.051548   15224 command_runner.go:130] ! I1014 15:46:17.275380       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I1014 08:47:28.051609   15224 command_runner.go:130] ! I1014 15:46:17.275397       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I1014 08:47:28.051609   15224 command_runner.go:130] ! I1014 15:46:17.275571       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I1014 08:47:28.051673   15224 command_runner.go:130] ! I1014 15:46:17.275603       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I1014 08:47:28.051673   15224 command_runner.go:130] ! W1014 15:46:17.275618       1 shared_informer.go:597] resyncPeriod 13h32m18.096579392s is smaller than resyncCheckPeriod 20h55m54.648340273s and the informer has already started. Changing it to 20h55m54.648340273s
	I1014 08:47:28.051739   15224 command_runner.go:130] ! I1014 15:46:17.276096       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I1014 08:47:28.051739   15224 command_runner.go:130] ! I1014 15:46:17.276150       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I1014 08:47:28.051739   15224 command_runner.go:130] ! I1014 15:46:17.276197       1 controllermanager.go:797] "Started controller" controller="resourcequota-controller"
	I1014 08:47:28.051808   15224 command_runner.go:130] ! I1014 15:46:17.276213       1 resource_quota_controller.go:300] "Starting resource quota controller" logger="resourcequota-controller"
	I1014 08:47:28.051808   15224 command_runner.go:130] ! I1014 15:46:17.276260       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I1014 08:47:28.051808   15224 command_runner.go:130] ! I1014 15:46:17.276359       1 resource_quota_monitor.go:308] "QuotaMonitor running" logger="resourcequota-controller"
	I1014 08:47:28.051808   15224 command_runner.go:130] ! I1014 15:46:17.283642       1 controllermanager.go:797] "Started controller" controller="replicaset-controller"
	I1014 08:47:28.051874   15224 command_runner.go:130] ! I1014 15:46:17.284697       1 replica_set.go:217] "Starting controller" logger="replicaset-controller" name="replicaset"
	I1014 08:47:28.051874   15224 command_runner.go:130] ! I1014 15:46:17.284913       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I1014 08:47:28.051874   15224 command_runner.go:130] ! I1014 15:46:17.288417       1 controllermanager.go:797] "Started controller" controller="cronjob-controller"
	I1014 08:47:28.051942   15224 command_runner.go:130] ! I1014 15:46:17.289073       1 cronjob_controllerv2.go:145] "Starting cronjob controller v2" logger="cronjob-controller"
	I1014 08:47:28.051942   15224 command_runner.go:130] ! I1014 15:46:17.289091       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I1014 08:47:28.051942   15224 command_runner.go:130] ! I1014 15:46:17.292212       1 controllermanager.go:797] "Started controller" controller="certificatesigningrequest-approving-controller"
	I1014 08:47:28.051942   15224 command_runner.go:130] ! I1014 15:46:17.292573       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I1014 08:47:28.051942   15224 command_runner.go:130] ! I1014 15:46:17.292591       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I1014 08:47:28.051942   15224 command_runner.go:130] ! I1014 15:46:17.295276       1 controllermanager.go:797] "Started controller" controller="clusterrole-aggregation-controller"
	I1014 08:47:28.052030   15224 command_runner.go:130] ! I1014 15:46:17.295785       1 controllermanager.go:749] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I1014 08:47:28.052030   15224 command_runner.go:130] ! I1014 15:46:17.298756       1 clusterroleaggregation_controller.go:194] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I1014 08:47:28.052030   15224 command_runner.go:130] ! I1014 15:46:17.299107       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I1014 08:47:28.052097   15224 command_runner.go:130] ! I1014 15:46:17.299997       1 controllermanager.go:797] "Started controller" controller="endpointslice-mirroring-controller"
	I1014 08:47:28.052136   15224 command_runner.go:130] ! I1014 15:46:17.302040       1 endpointslicemirroring_controller.go:227] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I1014 08:47:28.052161   15224 command_runner.go:130] ! I1014 15:46:17.302058       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I1014 08:47:28.052161   15224 command_runner.go:130] ! I1014 15:46:17.305668       1 controllermanager.go:797] "Started controller" controller="replicationcontroller-controller"
	I1014 08:47:28.052161   15224 command_runner.go:130] ! I1014 15:46:17.308801       1 replica_set.go:217] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I1014 08:47:28.052226   15224 command_runner.go:130] ! I1014 15:46:17.308819       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I1014 08:47:28.052226   15224 command_runner.go:130] ! I1014 15:46:17.318320       1 shared_informer.go:320] Caches are synced for tokens
	I1014 08:47:28.052226   15224 command_runner.go:130] ! I1014 15:46:17.329856       1 controllermanager.go:797] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I1014 08:47:28.052226   15224 command_runner.go:130] ! I1014 15:46:17.330990       1 horizontal.go:201] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I1014 08:47:28.052226   15224 command_runner.go:130] ! I1014 15:46:17.331395       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I1014 08:47:28.052226   15224 command_runner.go:130] ! I1014 15:46:17.345566       1 controllermanager.go:797] "Started controller" controller="disruption-controller"
	I1014 08:47:28.052226   15224 command_runner.go:130] ! I1014 15:46:17.345806       1 disruption.go:452] "Sending events to api server." logger="disruption-controller"
	I1014 08:47:28.052226   15224 command_runner.go:130] ! I1014 15:46:17.345841       1 disruption.go:463] "Starting disruption controller" logger="disruption-controller"
	I1014 08:47:28.052226   15224 command_runner.go:130] ! I1014 15:46:17.345937       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I1014 08:47:28.052226   15224 command_runner.go:130] ! E1014 15:46:17.350088       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I1014 08:47:28.052381   15224 command_runner.go:130] ! I1014 15:46:17.350237       1 controllermanager.go:775] "Warning: skipping controller" controller="service-lb-controller"
	I1014 08:47:28.052381   15224 command_runner.go:130] ! I1014 15:46:17.350277       1 controllermanager.go:775] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I1014 08:47:28.052381   15224 command_runner.go:130] ! I1014 15:46:17.359040       1 controllermanager.go:797] "Started controller" controller="endpoints-controller"
	I1014 08:47:28.052381   15224 command_runner.go:130] ! I1014 15:46:17.360243       1 endpoints_controller.go:182] "Starting endpoint controller" logger="endpoints-controller"
	I1014 08:47:28.052381   15224 command_runner.go:130] ! I1014 15:46:17.360265       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I1014 08:47:28.052473   15224 command_runner.go:130] ! I1014 15:46:17.362115       1 controllermanager.go:797] "Started controller" controller="pod-garbage-collector-controller"
	I1014 08:47:28.052473   15224 command_runner.go:130] ! I1014 15:46:17.362235       1 gc_controller.go:99] "Starting GC controller" logger="pod-garbage-collector-controller"
	I1014 08:47:28.052473   15224 command_runner.go:130] ! I1014 15:46:17.362245       1 shared_informer.go:313] Waiting for caches to sync for GC
	I1014 08:47:28.052473   15224 command_runner.go:130] ! I1014 15:46:17.364537       1 controllermanager.go:797] "Started controller" controller="serviceaccount-controller"
	I1014 08:47:28.052541   15224 command_runner.go:130] ! I1014 15:46:17.364725       1 serviceaccounts_controller.go:114] "Starting service account controller" logger="serviceaccount-controller"
	I1014 08:47:28.052541   15224 command_runner.go:130] ! I1014 15:46:17.364738       1 shared_informer.go:313] Waiting for caches to sync for service account
	I1014 08:47:28.052609   15224 command_runner.go:130] ! I1014 15:46:17.367152       1 controllermanager.go:797] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I1014 08:47:28.052609   15224 command_runner.go:130] ! I1014 15:46:17.367373       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I1014 08:47:28.052609   15224 command_runner.go:130] ! I1014 15:46:17.369619       1 controllermanager.go:797] "Started controller" controller="bootstrap-signer-controller"
	I1014 08:47:28.052609   15224 command_runner.go:130] ! I1014 15:46:17.370097       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I1014 08:47:28.052609   15224 command_runner.go:130] ! I1014 15:46:17.373109       1 controllermanager.go:797] "Started controller" controller="token-cleaner-controller"
	I1014 08:47:28.052673   15224 command_runner.go:130] ! I1014 15:46:17.373475       1 tokencleaner.go:117] "Starting token cleaner controller" logger="token-cleaner-controller"
	I1014 08:47:28.052673   15224 command_runner.go:130] ! I1014 15:46:17.373486       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I1014 08:47:28.052741   15224 command_runner.go:130] ! I1014 15:46:17.373493       1 shared_informer.go:320] Caches are synced for token_cleaner
	I1014 08:47:28.052741   15224 command_runner.go:130] ! I1014 15:46:17.375506       1 controllermanager.go:797] "Started controller" controller="ttl-after-finished-controller"
	I1014 08:47:28.052741   15224 command_runner.go:130] ! I1014 15:46:17.375684       1 ttlafterfinished_controller.go:112] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I1014 08:47:28.052741   15224 command_runner.go:130] ! I1014 15:46:17.375694       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I1014 08:47:28.052807   15224 command_runner.go:130] ! I1014 15:46:17.379552       1 controllermanager.go:797] "Started controller" controller="ephemeral-volume-controller"
	I1014 08:47:28.052807   15224 command_runner.go:130] ! I1014 15:46:17.380063       1 controller.go:173] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I1014 08:47:28.052807   15224 command_runner.go:130] ! I1014 15:46:17.380270       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I1014 08:47:28.052807   15224 command_runner.go:130] ! I1014 15:46:17.413079       1 controllermanager.go:797] "Started controller" controller="namespace-controller"
	I1014 08:47:28.052875   15224 command_runner.go:130] ! I1014 15:46:17.413676       1 namespace_controller.go:202] "Starting namespace controller" logger="namespace-controller"
	I1014 08:47:28.052875   15224 command_runner.go:130] ! I1014 15:46:17.415689       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I1014 08:47:28.052875   15224 command_runner.go:130] ! I1014 15:46:17.418729       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I1014 08:47:28.052875   15224 command_runner.go:130] ! I1014 15:46:17.418858       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I1014 08:47:28.052948   15224 command_runner.go:130] ! I1014 15:46:17.418983       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I1014 08:47:28.052948   15224 command_runner.go:130] ! I1014 15:46:17.420448       1 controllermanager.go:797] "Started controller" controller="certificatesigningrequest-signing-controller"
	I1014 08:47:28.052948   15224 command_runner.go:130] ! I1014 15:46:17.420573       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I1014 08:47:28.052948   15224 command_runner.go:130] ! I1014 15:46:17.420658       1 controllermanager.go:775] "Warning: skipping controller" controller="node-route-controller"
	I1014 08:47:28.053081   15224 command_runner.go:130] ! I1014 15:46:17.420878       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I1014 08:47:28.053081   15224 command_runner.go:130] ! I1014 15:46:17.422022       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I1014 08:47:28.053081   15224 command_runner.go:130] ! I1014 15:46:17.422169       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I1014 08:47:28.053081   15224 command_runner.go:130] ! I1014 15:46:17.422636       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I1014 08:47:28.053188   15224 command_runner.go:130] ! I1014 15:46:17.425521       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I1014 08:47:28.053188   15224 command_runner.go:130] ! I1014 15:46:17.425557       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I1014 08:47:28.053188   15224 command_runner.go:130] ! I1014 15:46:17.425747       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I1014 08:47:28.053250   15224 command_runner.go:130] ! I1014 15:46:17.425569       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I1014 08:47:28.053250   15224 command_runner.go:130] ! I1014 15:46:17.425577       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I1014 08:47:28.053283   15224 command_runner.go:130] ! E1014 15:46:17.429609       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I1014 08:47:28.053307   15224 command_runner.go:130] ! I1014 15:46:17.429771       1 controllermanager.go:775] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I1014 08:47:28.053307   15224 command_runner.go:130] ! I1014 15:46:17.432720       1 controllermanager.go:797] "Started controller" controller="persistentvolume-attach-detach-controller"
	I1014 08:47:28.053307   15224 command_runner.go:130] ! I1014 15:46:17.433242       1 attach_detach_controller.go:338] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I1014 08:47:28.053307   15224 command_runner.go:130] ! I1014 15:46:17.433509       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I1014 08:47:28.053307   15224 command_runner.go:130] ! I1014 15:46:17.437867       1 controllermanager.go:797] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I1014 08:47:28.053307   15224 command_runner.go:130] ! I1014 15:46:17.438432       1 pvc_protection_controller.go:105] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I1014 08:47:28.053307   15224 command_runner.go:130] ! I1014 15:46:17.438754       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I1014 08:47:28.053307   15224 command_runner.go:130] ! I1014 15:46:17.466996       1 controllermanager.go:797] "Started controller" controller="garbage-collector-controller"
	I1014 08:47:28.053307   15224 command_runner.go:130] ! I1014 15:46:17.467178       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I1014 08:47:28.053307   15224 command_runner.go:130] ! I1014 15:46:17.467191       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I1014 08:47:28.053307   15224 command_runner.go:130] ! I1014 15:46:17.467211       1 graph_builder.go:351] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I1014 08:47:28.053307   15224 command_runner.go:130] ! I1014 15:46:17.513974       1 controllermanager.go:797] "Started controller" controller="daemonset-controller"
	I1014 08:47:28.053307   15224 command_runner.go:130] ! I1014 15:46:17.514092       1 daemon_controller.go:294] "Starting daemon sets controller" logger="daemonset-controller"
	I1014 08:47:28.053307   15224 command_runner.go:130] ! I1014 15:46:17.514103       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I1014 08:47:28.053307   15224 command_runner.go:130] ! I1014 15:46:17.612272       1 controllermanager.go:797] "Started controller" controller="deployment-controller"
	I1014 08:47:28.053307   15224 command_runner.go:130] ! I1014 15:46:17.612390       1 deployment_controller.go:173] "Starting controller" logger="deployment-controller" controller="deployment"
	I1014 08:47:28.053307   15224 command_runner.go:130] ! I1014 15:46:17.612405       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I1014 08:47:28.053307   15224 command_runner.go:130] ! I1014 15:46:17.715625       1 controllermanager.go:797] "Started controller" controller="ttl-controller"
	I1014 08:47:28.053307   15224 command_runner.go:130] ! I1014 15:46:17.718491       1 ttl_controller.go:127] "Starting TTL controller" logger="ttl-controller"
	I1014 08:47:28.053307   15224 command_runner.go:130] ! I1014 15:46:17.718512       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I1014 08:47:28.053307   15224 command_runner.go:130] ! I1014 15:46:17.762259       1 node_lifecycle_controller.go:430] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I1014 08:47:28.053307   15224 command_runner.go:130] ! I1014 15:46:17.762792       1 controllermanager.go:797] "Started controller" controller="node-lifecycle-controller"
	I1014 08:47:28.053307   15224 command_runner.go:130] ! I1014 15:46:17.763108       1 node_lifecycle_controller.go:464] "Sending events to api server" logger="node-lifecycle-controller"
	I1014 08:47:28.053307   15224 command_runner.go:130] ! I1014 15:46:17.763488       1 node_lifecycle_controller.go:475] "Starting node controller" logger="node-lifecycle-controller"
	I1014 08:47:28.053307   15224 command_runner.go:130] ! I1014 15:46:17.763636       1 shared_informer.go:313] Waiting for caches to sync for taint
	I1014 08:47:28.053307   15224 command_runner.go:130] ! I1014 15:46:17.815269       1 controllermanager.go:797] "Started controller" controller="persistentvolume-expander-controller"
	I1014 08:47:28.053307   15224 command_runner.go:130] ! I1014 15:46:17.815926       1 controllermanager.go:749] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I1014 08:47:28.053307   15224 command_runner.go:130] ! I1014 15:46:17.815820       1 expand_controller.go:328] "Starting expand controller" logger="persistentvolume-expander-controller"
	I1014 08:47:28.053859   15224 command_runner.go:130] ! I1014 15:46:17.815981       1 shared_informer.go:313] Waiting for caches to sync for expand
	I1014 08:47:28.053904   15224 command_runner.go:130] ! I1014 15:46:17.865803       1 controllermanager.go:797] "Started controller" controller="taint-eviction-controller"
	I1014 08:47:28.053904   15224 command_runner.go:130] ! I1014 15:46:17.865833       1 controllermanager.go:749] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I1014 08:47:28.053904   15224 command_runner.go:130] ! I1014 15:46:17.865908       1 taint_eviction.go:281] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I1014 08:47:28.053904   15224 command_runner.go:130] ! I1014 15:46:17.865945       1 taint_eviction.go:287] "Sending events to api server" logger="taint-eviction-controller"
	I1014 08:47:28.053966   15224 command_runner.go:130] ! I1014 15:46:17.865986       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I1014 08:47:28.053966   15224 command_runner.go:130] ! I1014 15:46:17.923932       1 controllermanager.go:797] "Started controller" controller="job-controller"
	I1014 08:47:28.053966   15224 command_runner.go:130] ! I1014 15:46:17.924153       1 job_controller.go:226] "Starting job controller" logger="job-controller"
	I1014 08:47:28.053966   15224 command_runner.go:130] ! I1014 15:46:17.924184       1 shared_informer.go:313] Waiting for caches to sync for job
	I1014 08:47:28.054040   15224 command_runner.go:130] ! I1014 15:46:17.978728       1 controllermanager.go:797] "Started controller" controller="root-ca-certificate-publisher-controller"
	I1014 08:47:28.054040   15224 command_runner.go:130] ! I1014 15:46:17.978796       1 publisher.go:107] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I1014 08:47:28.054040   15224 command_runner.go:130] ! I1014 15:46:17.978809       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I1014 08:47:28.054094   15224 command_runner.go:130] ! I1014 15:46:18.018003       1 controllermanager.go:797] "Started controller" controller="endpointslice-controller"
	I1014 08:47:28.054117   15224 command_runner.go:130] ! I1014 15:46:18.018177       1 endpointslice_controller.go:281] "Starting endpoint slice controller" logger="endpointslice-controller"
	I1014 08:47:28.054117   15224 command_runner.go:130] ! I1014 15:46:18.018192       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I1014 08:47:28.054182   15224 command_runner.go:130] ! I1014 15:46:18.077409       1 controllermanager.go:797] "Started controller" controller="statefulset-controller"
	I1014 08:47:28.054182   15224 command_runner.go:130] ! I1014 15:46:18.078007       1 stateful_set.go:166] "Starting stateful set controller" logger="statefulset-controller"
	I1014 08:47:28.054182   15224 command_runner.go:130] ! I1014 15:46:18.078026       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I1014 08:47:28.054261   15224 command_runner.go:130] ! I1014 15:46:18.245465       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I1014 08:47:28.054282   15224 command_runner.go:130] ! I1014 15:46:18.246368       1 controllermanager.go:797] "Started controller" controller="node-ipam-controller"
	I1014 08:47:28.054282   15224 command_runner.go:130] ! I1014 15:46:18.246712       1 node_ipam_controller.go:141] "Starting ipam controller" logger="node-ipam-controller"
	I1014 08:47:28.054282   15224 command_runner.go:130] ! I1014 15:46:18.246910       1 shared_informer.go:313] Waiting for caches to sync for node
	I1014 08:47:28.054282   15224 command_runner.go:130] ! I1014 15:46:18.264869       1 controllermanager.go:797] "Started controller" controller="persistentvolume-binder-controller"
	I1014 08:47:28.054342   15224 command_runner.go:130] ! I1014 15:46:18.264984       1 pv_controller_base.go:308] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I1014 08:47:28.054367   15224 command_runner.go:130] ! I1014 15:46:18.266232       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I1014 08:47:28.054367   15224 command_runner.go:130] ! I1014 15:46:18.321121       1 controllermanager.go:797] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I1014 08:47:28.054419   15224 command_runner.go:130] ! I1014 15:46:18.323482       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I1014 08:47:28.054442   15224 command_runner.go:130] ! I1014 15:46:18.323903       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I1014 08:47:28.054442   15224 command_runner.go:130] ! I1014 15:46:18.431796       1 controllermanager.go:797] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I1014 08:47:28.054471   15224 command_runner.go:130] ! I1014 15:46:18.431873       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I1014 08:47:28.054471   15224 command_runner.go:130] ! I1014 15:46:18.465851       1 controllermanager.go:797] "Started controller" controller="persistentvolume-protection-controller"
	I1014 08:47:28.054471   15224 command_runner.go:130] ! I1014 15:46:18.468767       1 pv_protection_controller.go:81] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I1014 08:47:28.054471   15224 command_runner.go:130] ! I1014 15:46:18.469028       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I1014 08:47:28.054471   15224 command_runner.go:130] ! I1014 15:46:18.485571       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I1014 08:47:28.054471   15224 command_runner.go:130] ! I1014 15:46:18.534720       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I1014 08:47:28.054471   15224 command_runner.go:130] ! I1014 15:46:18.539015       1 shared_informer.go:320] Caches are synced for PVC protection
	I1014 08:47:28.054471   15224 command_runner.go:130] ! I1014 15:46:18.541399       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-671000\" does not exist"
	I1014 08:47:28.054471   15224 command_runner.go:130] ! I1014 15:46:18.541615       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-671000-m02\" does not exist"
	I1014 08:47:28.054471   15224 command_runner.go:130] ! I1014 15:46:18.549102       1 shared_informer.go:320] Caches are synced for TTL
	I1014 08:47:28.054471   15224 command_runner.go:130] ! I1014 15:46:18.549549       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-671000-m03\" does not exist"
	I1014 08:47:28.054471   15224 command_runner.go:130] ! I1014 15:46:18.550590       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I1014 08:47:28.054471   15224 command_runner.go:130] ! I1014 15:46:18.551387       1 shared_informer.go:320] Caches are synced for node
	I1014 08:47:28.054471   15224 command_runner.go:130] ! I1014 15:46:18.554673       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I1014 08:47:28.054471   15224 command_runner.go:130] ! I1014 15:46:18.557592       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1014 08:47:28.054471   15224 command_runner.go:130] ! I1014 15:46:18.558471       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I1014 08:47:28.054471   15224 command_runner.go:130] ! I1014 15:46:18.558669       1 shared_informer.go:320] Caches are synced for cidrallocator
	I1014 08:47:28.054471   15224 command_runner.go:130] ! I1014 15:46:18.559066       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000"
	I1014 08:47:28.054471   15224 command_runner.go:130] ! I1014 15:46:18.559166       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:28.054471   15224 command_runner.go:130] ! I1014 15:46:18.559144       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:28.054471   15224 command_runner.go:130] ! I1014 15:46:18.560823       1 shared_informer.go:320] Caches are synced for endpoint
	I1014 08:47:28.054471   15224 command_runner.go:130] ! I1014 15:46:18.563147       1 shared_informer.go:320] Caches are synced for GC
	I1014 08:47:28.054471   15224 command_runner.go:130] ! I1014 15:46:18.566072       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I1014 08:47:28.054471   15224 command_runner.go:130] ! I1014 15:46:18.566447       1 shared_informer.go:320] Caches are synced for persistent volume
	I1014 08:47:28.054471   15224 command_runner.go:130] ! I1014 15:46:18.566267       1 shared_informer.go:320] Caches are synced for service account
	I1014 08:47:28.054471   15224 command_runner.go:130] ! I1014 15:46:18.570369       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I1014 08:47:28.054471   15224 command_runner.go:130] ! I1014 15:46:18.570522       1 shared_informer.go:320] Caches are synced for PV protection
	I1014 08:47:28.054471   15224 command_runner.go:130] ! I1014 15:46:18.577368       1 shared_informer.go:320] Caches are synced for TTL after finished
	I1014 08:47:28.054471   15224 command_runner.go:130] ! I1014 15:46:18.580187       1 shared_informer.go:320] Caches are synced for crt configmap
	I1014 08:47:28.054471   15224 command_runner.go:130] ! I1014 15:46:18.580534       1 shared_informer.go:320] Caches are synced for ephemeral
	I1014 08:47:28.054471   15224 command_runner.go:130] ! I1014 15:46:18.585372       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I1014 08:47:28.054471   15224 command_runner.go:130] ! I1014 15:46:18.593972       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I1014 08:47:28.054471   15224 command_runner.go:130] ! I1014 15:46:18.595014       1 shared_informer.go:320] Caches are synced for cronjob
	I1014 08:47:28.055031   15224 command_runner.go:130] ! I1014 15:46:18.600012       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I1014 08:47:28.055031   15224 command_runner.go:130] ! I1014 15:46:18.602930       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I1014 08:47:28.055091   15224 command_runner.go:130] ! I1014 15:46:18.609680       1 shared_informer.go:320] Caches are synced for ReplicationController
	I1014 08:47:28.055091   15224 command_runner.go:130] ! I1014 15:46:18.613447       1 shared_informer.go:320] Caches are synced for deployment
	I1014 08:47:28.055091   15224 command_runner.go:130] ! I1014 15:46:18.616246       1 shared_informer.go:320] Caches are synced for expand
	I1014 08:47:28.055091   15224 command_runner.go:130] ! I1014 15:46:18.616739       1 shared_informer.go:320] Caches are synced for namespace
	I1014 08:47:28.055091   15224 command_runner.go:130] ! I1014 15:46:18.618534       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I1014 08:47:28.055091   15224 command_runner.go:130] ! I1014 15:46:18.625249       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I1014 08:47:28.055188   15224 command_runner.go:130] ! I1014 15:46:18.630423       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I1014 08:47:28.055221   15224 command_runner.go:130] ! I1014 15:46:18.632938       1 shared_informer.go:320] Caches are synced for HPA
	I1014 08:47:28.055251   15224 command_runner.go:130] ! I1014 15:46:18.633193       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I1014 08:47:28.055251   15224 command_runner.go:130] ! I1014 15:46:18.634381       1 shared_informer.go:320] Caches are synced for job
	I1014 08:47:28.055251   15224 command_runner.go:130] ! I1014 15:46:18.634623       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1014 08:47:28.055251   15224 command_runner.go:130] ! I1014 15:46:18.634920       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I1014 08:47:28.055251   15224 command_runner.go:130] ! I1014 15:46:18.649619       1 shared_informer.go:320] Caches are synced for disruption
	I1014 08:47:28.055251   15224 command_runner.go:130] ! I1014 15:46:18.668155       1 shared_informer.go:320] Caches are synced for taint
	I1014 08:47:28.055251   15224 command_runner.go:130] ! I1014 15:46:18.670026       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1014 08:47:28.055251   15224 command_runner.go:130] ! I1014 15:46:18.680357       1 shared_informer.go:320] Caches are synced for stateful set
	I1014 08:47:28.055251   15224 command_runner.go:130] ! I1014 15:46:18.700582       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000"
	I1014 08:47:28.055251   15224 command_runner.go:130] ! I1014 15:46:18.708812       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:28.055251   15224 command_runner.go:130] ! I1014 15:46:18.714134       1 shared_informer.go:320] Caches are synced for daemon sets
	I1014 08:47:28.055251   15224 command_runner.go:130] ! I1014 15:46:18.718536       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-671000"
	I1014 08:47:28.055251   15224 command_runner.go:130] ! I1014 15:46:18.718841       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-671000-m02"
	I1014 08:47:28.055251   15224 command_runner.go:130] ! I1014 15:46:18.719036       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-671000-m03"
	I1014 08:47:28.055251   15224 command_runner.go:130] ! I1014 15:46:18.721210       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="135.448763ms"
	I1014 08:47:28.055251   15224 command_runner.go:130] ! I1014 15:46:18.721514       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="40.1µs"
	I1014 08:47:28.055251   15224 command_runner.go:130] ! I1014 15:46:18.721809       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="136.173363ms"
	I1014 08:47:28.055251   15224 command_runner.go:130] ! I1014 15:46:18.722033       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="29.5µs"
	I1014 08:47:28.055251   15224 command_runner.go:130] ! I1014 15:46:18.722234       1 node_lifecycle_controller.go:1036] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1014 08:47:28.055251   15224 command_runner.go:130] ! I1014 15:46:18.777385       1 shared_informer.go:320] Caches are synced for resource quota
	I1014 08:47:28.055251   15224 command_runner.go:130] ! I1014 15:46:18.786812       1 shared_informer.go:320] Caches are synced for resource quota
	I1014 08:47:28.055251   15224 command_runner.go:130] ! I1014 15:46:18.833914       1 shared_informer.go:320] Caches are synced for attach detach
	I1014 08:47:28.055251   15224 command_runner.go:130] ! I1014 15:46:19.252391       1 shared_informer.go:320] Caches are synced for garbage collector
	I1014 08:47:28.055251   15224 command_runner.go:130] ! I1014 15:46:19.267855       1 shared_informer.go:320] Caches are synced for garbage collector
	I1014 08:47:28.055251   15224 command_runner.go:130] ! I1014 15:46:19.268119       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I1014 08:47:28.055251   15224 command_runner.go:130] ! I1014 15:46:59.871635       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000"
	I1014 08:47:28.055251   15224 command_runner.go:130] ! I1014 15:46:59.892163       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000"
	I1014 08:47:28.055251   15224 command_runner.go:130] ! I1014 15:47:03.736416       1 node_lifecycle_controller.go:1055] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I1014 08:47:28.055251   15224 command_runner.go:130] ! I1014 15:47:13.821153       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:28.055251   15224 command_runner.go:130] ! I1014 15:47:20.979721       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="114.5µs"
	I1014 08:47:28.055251   15224 command_runner.go:130] ! I1014 15:47:22.061324       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="10.05527ms"
	I1014 08:47:28.055251   15224 command_runner.go:130] ! I1014 15:47:22.062652       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="53.8µs"
	I1014 08:47:28.055251   15224 command_runner.go:130] ! I1014 15:47:22.098955       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="28.422114ms"
	I1014 08:47:28.055251   15224 command_runner.go:130] ! I1014 15:47:22.099794       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="313.699µs"
	I1014 08:47:28.055251   15224 command_runner.go:130] ! I1014 15:47:23.920002       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:28.072615   15224 logs.go:123] Gathering logs for kindnet [fcdf89a3ac8c] ...
	I1014 08:47:28.072615   15224 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcdf89a3ac8c"
	I1014 08:47:28.105389   15224 command_runner.go:130] ! I1014 15:32:44.862261       1 main.go:300] handling current node
	I1014 08:47:28.105857   15224 command_runner.go:130] ! I1014 15:32:44.862301       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.105857   15224 command_runner.go:130] ! I1014 15:32:44.862313       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.105857   15224 command_runner.go:130] ! I1014 15:32:44.862605       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.105857   15224 command_runner.go:130] ! I1014 15:32:44.862636       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.105857   15224 command_runner.go:130] ! I1014 15:32:54.862103       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.105857   15224 command_runner.go:130] ! I1014 15:32:54.862232       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.105857   15224 command_runner.go:130] ! I1014 15:32:54.862979       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.105857   15224 command_runner.go:130] ! I1014 15:32:54.863013       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.105857   15224 command_runner.go:130] ! I1014 15:32:54.863219       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.105857   15224 command_runner.go:130] ! I1014 15:32:54.863233       1 main.go:300] handling current node
	I1014 08:47:28.106003   15224 command_runner.go:130] ! I1014 15:33:04.864377       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.106003   15224 command_runner.go:130] ! I1014 15:33:04.864510       1 main.go:300] handling current node
	I1014 08:47:28.106003   15224 command_runner.go:130] ! I1014 15:33:04.864534       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.106003   15224 command_runner.go:130] ! I1014 15:33:04.864544       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.106003   15224 command_runner.go:130] ! I1014 15:33:04.864795       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.106003   15224 command_runner.go:130] ! I1014 15:33:04.864807       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.106003   15224 command_runner.go:130] ! I1014 15:33:14.870098       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.106003   15224 command_runner.go:130] ! I1014 15:33:14.870279       1 main.go:300] handling current node
	I1014 08:47:28.106003   15224 command_runner.go:130] ! I1014 15:33:14.870319       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.106003   15224 command_runner.go:130] ! I1014 15:33:14.870394       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.106003   15224 command_runner.go:130] ! I1014 15:33:14.872221       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.106003   15224 command_runner.go:130] ! I1014 15:33:14.872265       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.106003   15224 command_runner.go:130] ! I1014 15:33:24.862168       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.106003   15224 command_runner.go:130] ! I1014 15:33:24.862234       1 main.go:300] handling current node
	I1014 08:47:28.106003   15224 command_runner.go:130] ! I1014 15:33:24.862290       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.106003   15224 command_runner.go:130] ! I1014 15:33:24.862303       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.106003   15224 command_runner.go:130] ! I1014 15:33:24.862799       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.106003   15224 command_runner.go:130] ! I1014 15:33:24.862950       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.106003   15224 command_runner.go:130] ! I1014 15:33:34.870712       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.106003   15224 command_runner.go:130] ! I1014 15:33:34.870952       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.106003   15224 command_runner.go:130] ! I1014 15:33:34.871749       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.106003   15224 command_runner.go:130] ! I1014 15:33:34.871848       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.106003   15224 command_runner.go:130] ! I1014 15:33:34.872312       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.106003   15224 command_runner.go:130] ! I1014 15:33:34.872409       1 main.go:300] handling current node
	I1014 08:47:28.106003   15224 command_runner.go:130] ! I1014 15:33:44.868271       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.106003   15224 command_runner.go:130] ! I1014 15:33:44.868442       1 main.go:300] handling current node
	I1014 08:47:28.106003   15224 command_runner.go:130] ! I1014 15:33:44.868482       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.106003   15224 command_runner.go:130] ! I1014 15:33:44.868509       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.106003   15224 command_runner.go:130] ! I1014 15:33:44.869165       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.106003   15224 command_runner.go:130] ! I1014 15:33:44.869252       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.106003   15224 command_runner.go:130] ! I1014 15:33:54.862162       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.106003   15224 command_runner.go:130] ! I1014 15:33:54.862365       1 main.go:300] handling current node
	I1014 08:47:28.106003   15224 command_runner.go:130] ! I1014 15:33:54.862404       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.106003   15224 command_runner.go:130] ! I1014 15:33:54.862429       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:33:54.862766       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:33:54.862800       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:34:04.870860       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:34:04.870993       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:34:04.871751       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:34:04.871830       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:34:04.872365       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:34:04.872444       1 main.go:300] handling current node
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:34:14.868274       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:34:14.868410       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:34:14.869151       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:34:14.869244       1 main.go:300] handling current node
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:34:14.869263       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:34:14.869271       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:34:24.869326       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:34:24.869383       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:34:24.870365       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:34:24.870464       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:34:24.871197       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:34:24.871235       1 main.go:300] handling current node
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:34:34.862280       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:34:34.862387       1 main.go:300] handling current node
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:34:34.862420       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:34:34.862440       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:34:34.862809       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:34:34.862844       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:34:44.870611       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:34:44.870703       1 main.go:300] handling current node
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:34:44.870732       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:34:44.870826       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:34:44.871348       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:34:44.871437       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:34:54.862260       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:34:54.862358       1 main.go:300] handling current node
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:34:54.862379       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:34:54.862388       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:34:54.862782       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:34:54.862862       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:35:04.871418       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:35:04.871489       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:35:04.872322       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:35:04.872416       1 main.go:300] handling current node
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:35:04.872437       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:35:04.872445       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:35:14.870301       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:35:14.870413       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:35:14.870922       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:35:14.870941       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:35:14.871055       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:35:14.871086       1 main.go:300] handling current node
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:35:24.870776       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:35:24.870814       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:35:24.871449       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:35:24.871682       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:35:24.872057       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:35:24.872149       1 main.go:300] handling current node
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:35:34.871155       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:35:34.871422       1 main.go:300] handling current node
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:35:34.876612       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:35:34.876630       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:35:34.876808       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:35:34.876817       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:35:44.872405       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:35:44.872450       1 main.go:300] handling current node
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:35:44.872467       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:35:44.872473       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:35:44.873120       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:35:44.873155       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:35:54.862113       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:35:54.862220       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:35:54.862608       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:35:54.862725       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:35:54.862993       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:35:54.863089       1 main.go:300] handling current node
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:36:04.870594       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:36:04.870634       1 main.go:300] handling current node
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:36:04.870705       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:36:04.870719       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:36:04.871246       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:36:04.871261       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:36:14.862194       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:36:14.862337       1 main.go:300] handling current node
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:36:14.862361       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:36:14.862370       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:36:14.863024       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:36:14.863053       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:36:24.870839       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:36:24.871114       1 main.go:300] handling current node
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:36:24.871303       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:36:24.871618       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:36:24.872052       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:36:24.872164       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:36:34.870320       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:36:34.870375       1 main.go:300] handling current node
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:36:34.870396       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:36:34.870404       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:36:34.870774       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:36:34.870810       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:36:44.864305       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:36:44.864530       1 main.go:300] handling current node
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:36:44.864616       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:36:44.864683       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:36:44.865206       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:36:44.865241       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:36:54.862701       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:36:54.862834       1 main.go:300] handling current node
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:36:54.862940       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:36:54.863054       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:36:54.864321       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:36:54.864397       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:37:04.863761       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:37:04.863854       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:37:04.864505       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:37:04.864638       1 main.go:300] handling current node
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:37:04.864656       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:37:04.864664       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:37:14.866293       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:37:14.866653       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:37:14.867034       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:37:14.867067       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:37:14.867179       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:37:14.867247       1 main.go:300] handling current node
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:37:24.867969       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:37:24.868019       1 main.go:300] handling current node
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:37:24.868036       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:37:24.868043       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:37:24.868511       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:37:24.868549       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:37:34.863786       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:37:34.864224       1 main.go:300] handling current node
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:37:34.864384       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:37:34.864448       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:37:34.864771       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:37:34.864865       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:37:44.871310       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:37:44.871352       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:37:44.871803       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:37:44.871837       1 main.go:300] handling current node
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:37:44.871852       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:37:44.871859       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:37:54.862573       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:37:54.862694       1 main.go:300] handling current node
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:37:54.862714       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:37:54.862723       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:37:54.863288       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:37:54.863364       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:38:04.872124       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:38:04.872285       1 main.go:300] handling current node
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:38:04.872330       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:38:04.872343       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:38:04.873184       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:38:04.873352       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:38:14.863654       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:38:14.863788       1 main.go:300] handling current node
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:38:14.863812       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:38:14.863822       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:38:14.864488       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:38:14.864585       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:38:24.868537       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:38:24.868643       1 main.go:300] handling current node
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:38:24.868664       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:38:24.868672       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:38:24.869258       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:38:24.869347       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:38:34.864233       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:38:34.864469       1 main.go:300] handling current node
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:38:34.864497       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:38:34.864509       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:38:34.865023       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:38:34.865061       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:38:44.870754       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:38:44.870859       1 main.go:300] handling current node
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:38:44.870919       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:38:44.870931       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:38:44.871124       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:38:44.871155       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:38:54.862849       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:38:54.863008       1 main.go:300] handling current node
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:38:54.863029       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:38:54.863040       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:38:54.863313       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:38:54.863343       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:39:04.861865       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:39:04.862353       1 main.go:300] handling current node
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:39:04.862819       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:39:04.863053       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:39:04.863648       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:39:04.865127       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:39:14.870473       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:39:14.870526       1 main.go:300] handling current node
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:39:14.870544       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:39:14.870551       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:39:14.871123       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:39:14.871161       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:39:24.862264       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:39:24.862304       1 main.go:300] handling current node
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:39:24.862323       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:39:24.862331       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:39:24.863326       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:39:24.863417       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:39:34.862868       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:39:34.863041       1 main.go:300] handling current node
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:39:34.863063       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:39:34.863072       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:39:34.863370       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:39:34.863460       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:39:44.872051       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:39:44.872175       1 main.go:300] handling current node
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:39:44.872198       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:39:44.872392       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:39:44.873038       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:39:44.873160       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:39:54.862953       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:39:54.862990       1 main.go:300] handling current node
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:39:54.863013       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:39:54.863022       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:39:54.863377       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:39:54.863412       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:40:04.864160       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:40:04.864198       1 main.go:300] handling current node
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:40:04.864216       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:40:04.864222       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:40:04.864390       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:40:04.864399       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:40:14.862864       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:40:14.863081       1 main.go:300] handling current node
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:40:14.863442       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:40:14.863496       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:40:14.864019       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:40:14.864052       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:40:24.867383       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:40:24.867717       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:40:24.868487       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:40:24.868619       1 main.go:300] handling current node
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:40:24.868640       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:40:24.868650       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:40:34.866060       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:40:34.866194       1 main.go:300] handling current node
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:40:34.866224       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:40:34.866240       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:40:34.867632       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:40:34.867868       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:40:44.875002       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:40:44.875336       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:40:44.875792       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:40:44.875991       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:40:44.876302       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:40:44.876531       1 main.go:300] handling current node
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:40:54.862504       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:40:54.862640       1 main.go:300] handling current node
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:40:54.862766       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:40:54.862834       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:40:54.863108       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:40:54.863140       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:41:04.863181       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:41:04.863304       1 main.go:300] handling current node
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:41:04.863326       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:41:04.863335       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:41:04.863824       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:41:04.863963       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:41:14.868270       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:41:14.868443       1 main.go:300] handling current node
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:41:14.868487       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:41:14.868541       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:41:14.868808       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:41:14.868843       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:41:24.862261       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:41:24.862508       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:41:24.863242       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:41:24.863792       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:41:24.864172       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:41:24.864327       1 main.go:300] handling current node
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:41:34.862294       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:41:34.862355       1 main.go:300] handling current node
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:41:34.862377       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:41:34.862385       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:41:44.862674       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:41:44.862799       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:41:44.863254       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.20.102.29 Flags: [] Table: 0 Realm: 0} 
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:41:44.863509       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:41:44.863768       1 main.go:300] handling current node
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:41:44.863945       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:41:44.864052       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:41:54.862083       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:41:54.862208       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:41:54.862577       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:41:54.862723       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:41:54.863005       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:41:54.863097       1 main.go:300] handling current node
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:42:04.870504       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:42:04.871039       1 main.go:300] handling current node
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:42:04.871167       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:42:04.871277       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:42:04.871721       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:42:04.871740       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:42:14.862252       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:42:14.862455       1 main.go:300] handling current node
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:42:14.862499       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:42:14.862521       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:42:14.863189       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:42:14.863224       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:42:24.862819       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:42:24.863072       1 main.go:300] handling current node
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:42:24.863093       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:42:24.863103       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:42:24.864093       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:42:24.864136       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:42:34.863373       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:42:34.863425       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:42:34.863670       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:42:34.863742       1 main.go:300] handling current node
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:42:34.863763       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:42:34.863771       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:42:44.861842       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:42:44.862176       1 main.go:300] handling current node
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:42:44.862271       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:42:44.862357       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:42:44.862743       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:42:44.863009       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:42:54.863140       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:42:54.863181       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:42:54.863865       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:42:54.864051       1 main.go:300] handling current node
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:42:54.864417       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:42:54.864427       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:43:04.862539       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:43:04.862625       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:43:04.863289       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:43:04.863395       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:43:04.863612       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:43:04.863764       1 main.go:300] handling current node
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:43:14.871242       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:43:14.871727       1 main.go:300] handling current node
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:43:14.871818       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:43:14.871846       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:43:14.872085       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:43:14.872201       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:43:24.871405       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:43:24.871540       1 main.go:300] handling current node
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:43:24.871566       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:43:24.871575       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:43:24.871835       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:43:24.872193       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:43:34.863042       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:43:34.863237       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:43:34.863962       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:43:34.864059       1 main.go:300] handling current node
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:43:34.864077       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:43:34.864085       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:43:44.871016       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:43:44.871057       1 main.go:300] handling current node
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:43:44.871074       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:43:44.871081       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:43:44.871299       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:43:44.871310       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
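
Editor's note: the repeating kindnet entries above are a periodic node-sync loop. Roughly every ten seconds the daemon walks the cluster's nodes, logs each node's IP (main.go:296), skips routing for the node it runs on (main.go:300), and records the pod CIDR reachable via each remote node (main.go:323), installing a route when a CIDR changes (routes.go:62, visible where m03 moves from 10.244.2.0/24 to 10.244.3.0/24). The following Go sketch is illustrative only, not kindnet's actual source; the node names, IPs, and CIDRs are copied from the log above.

// Illustrative sketch of the reconcile loop behind the log lines above.
package main

import (
	"fmt"
	"net"
	"time"
)

type node struct {
	name    string
	ip      string
	podCIDR string
	current bool // true for the node this daemon runs on
}

func syncRoutes(nodes []node) {
	for _, n := range nodes {
		fmt.Printf("Handling node with IPs: map[%s:{}]\n", n.ip)
		if n.current {
			fmt.Println("handling current node") // no route needed to ourselves
			continue
		}
		_, cidr, err := net.ParseCIDR(n.podCIDR)
		if err != nil {
			continue
		}
		fmt.Printf("Node %s has CIDR [%s]\n", n.name, cidr)
		// The real daemon would program the kernel here, e.g. a netlink
		// route-replace with Dst=cidr and Gw=n.ip, as routes.go:62 logs.
	}
}

func main() {
	nodes := []node{
		{"multinode-671000", "172.20.100.167", "10.244.0.0/24", true},
		{"multinode-671000-m02", "172.20.109.137", "10.244.1.0/24", false},
		{"multinode-671000-m03", "172.20.102.29", "10.244.3.0/24", false},
	}
	for range time.Tick(10 * time.Second) { // matches the ~10s cadence in the log
		syncRoutes(nodes)
	}
}
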
	I1014 08:47:28.127469   15224 logs.go:123] Gathering logs for dmesg ...
	I1014 08:47:28.127469   15224 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 08:47:28.150692   15224 command_runner.go:130] > [Oct14 15:44] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I1014 08:47:28.150760   15224 command_runner.go:130] > [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I1014 08:47:28.150760   15224 command_runner.go:130] > [  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I1014 08:47:28.150760   15224 command_runner.go:130] > [  +0.121183] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I1014 08:47:28.150821   15224 command_runner.go:130] > [  +0.024192] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I1014 08:47:28.150846   15224 command_runner.go:130] > [  +0.000000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I1014 08:47:28.150875   15224 command_runner.go:130] > [  +0.000000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I1014 08:47:28.150875   15224 command_runner.go:130] > [  +0.058588] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I1014 08:47:28.150875   15224 command_runner.go:130] > [  +0.021951] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I1014 08:47:28.150875   15224 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I1014 08:47:28.150875   15224 command_runner.go:130] > [  +5.764502] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I1014 08:47:28.150875   15224 command_runner.go:130] > [  +0.701221] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I1014 08:47:28.150875   15224 command_runner.go:130] > [  +1.823727] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	I1014 08:47:28.150875   15224 command_runner.go:130] > [  +7.351082] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I1014 08:47:28.150875   15224 command_runner.go:130] > [  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I1014 08:47:28.150875   15224 command_runner.go:130] > [  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	I1014 08:47:28.150875   15224 command_runner.go:130] > [Oct14 15:45] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	I1014 08:47:28.150875   15224 command_runner.go:130] > [  +0.175163] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	I1014 08:47:28.150875   15224 command_runner.go:130] > [ +26.061812] systemd-fstab-generator[1013]: Ignoring "noauto" option for root device
	I1014 08:47:28.150875   15224 command_runner.go:130] > [  +0.098944] kauditd_printk_skb: 71 callbacks suppressed
	I1014 08:47:28.150875   15224 command_runner.go:130] > [  +0.531295] systemd-fstab-generator[1053]: Ignoring "noauto" option for root device
	I1014 08:47:28.150875   15224 command_runner.go:130] > [Oct14 15:46] systemd-fstab-generator[1065]: Ignoring "noauto" option for root device
	I1014 08:47:28.150875   15224 command_runner.go:130] > [  +0.229472] systemd-fstab-generator[1079]: Ignoring "noauto" option for root device
	I1014 08:47:28.150875   15224 command_runner.go:130] > [  +2.943333] systemd-fstab-generator[1310]: Ignoring "noauto" option for root device
	I1014 08:47:28.150875   15224 command_runner.go:130] > [  +0.192845] systemd-fstab-generator[1321]: Ignoring "noauto" option for root device
	I1014 08:47:28.150875   15224 command_runner.go:130] > [  +0.209914] systemd-fstab-generator[1333]: Ignoring "noauto" option for root device
	I1014 08:47:28.150875   15224 command_runner.go:130] > [  +0.290916] systemd-fstab-generator[1348]: Ignoring "noauto" option for root device
	I1014 08:47:28.150875   15224 command_runner.go:130] > [  +0.928050] systemd-fstab-generator[1473]: Ignoring "noauto" option for root device
	I1014 08:47:28.150875   15224 command_runner.go:130] > [  +0.103044] kauditd_printk_skb: 202 callbacks suppressed
	I1014 08:47:28.150875   15224 command_runner.go:130] > [  +3.884891] systemd-fstab-generator[1614]: Ignoring "noauto" option for root device
	I1014 08:47:28.150875   15224 command_runner.go:130] > [  +1.232270] kauditd_printk_skb: 44 callbacks suppressed
	I1014 08:47:28.150875   15224 command_runner.go:130] > [  +5.880292] kauditd_printk_skb: 30 callbacks suppressed
	I1014 08:47:28.150875   15224 command_runner.go:130] > [  +4.216972] systemd-fstab-generator[2460]: Ignoring "noauto" option for root device
	I1014 08:47:28.150875   15224 command_runner.go:130] > [ +15.813728] kauditd_printk_skb: 72 callbacks suppressed
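
Editor's note: the dmesg block above was produced by the exact filter shown at logs.go:123, which keeps only warning-and-worse kernel messages. A minimal Go sketch of running that same command locally follows; it is not minikube's ssh_runner, just a plain exec of the quoted pipeline, and it needs sudo rights on the target VM.

// Minimal sketch: run the same dmesg filter the log-gatherer uses above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("/bin/bash", "-c",
		`sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`).CombinedOutput()
	if err != nil {
		fmt.Println("dmesg failed:", err)
	}
	fmt.Print(string(out))
}
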
	I1014 08:47:28.152624   15224 logs.go:123] Gathering logs for coredns [5d223e2e64fc] ...
	I1014 08:47:28.152624   15224 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d223e2e64fc"
	I1014 08:47:28.182696   15224 command_runner.go:130] > .:53
	I1014 08:47:28.182696   15224 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 6020d6b23d8ab86e45d1d2aab12a43bd19ffd1800c3e6fd0a66779be525e59cd5618fbe141b55ee3a43e920652bc72e7efafa696e55e63b2f614e5fc80dabe10
	I1014 08:47:28.182696   15224 command_runner.go:130] > CoreDNS-1.11.3
	I1014 08:47:28.182844   15224 command_runner.go:130] > linux/amd64, go1.21.11, a6338e9
	I1014 08:47:28.182844   15224 command_runner.go:130] > [INFO] 127.0.0.1:42996 - 9104 "HINFO IN 5434967794797104596.5472118418078127170. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.148386647s
	I1014 08:47:28.183199   15224 logs.go:123] Gathering logs for kindnet [bba035362eb9] ...
	I1014 08:47:28.183199   15224 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bba035362eb9"
	I1014 08:47:28.209334   15224 command_runner.go:130] ! I1014 15:46:18.000845       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1014 08:47:28.210651   15224 command_runner.go:130] ! I1014 15:46:18.015386       1 main.go:139] hostIP = 172.20.106.123
	I1014 08:47:28.211385   15224 command_runner.go:130] ! podIP = 172.20.106.123
	I1014 08:47:28.211385   15224 command_runner.go:130] ! I1014 15:46:18.015613       1 main.go:148] setting mtu 1500 for CNI 
	I1014 08:47:28.211385   15224 command_runner.go:130] ! I1014 15:46:18.015630       1 main.go:178] kindnetd IP family: "ipv4"
	I1014 08:47:28.211385   15224 command_runner.go:130] ! I1014 15:46:18.015641       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I1014 08:47:28.211385   15224 command_runner.go:130] ! I1014 15:46:18.919987       1 main.go:238] Error creating network policy controller: could not run nftables command: /dev/stdin:1:1-37: Error: Could not process rule: Operation not supported
	I1014 08:47:28.211385   15224 command_runner.go:130] ! add table inet kube-network-policies
	I1014 08:47:28.211385   15224 command_runner.go:130] ! ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
	I1014 08:47:28.211385   15224 command_runner.go:130] ! , skipping network policies
	I1014 08:47:28.211385   15224 command_runner.go:130] ! W1014 15:46:48.934772       1 reflector.go:561] pkg/mod/k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I1014 08:47:28.211385   15224 command_runner.go:130] ! E1014 15:46:48.935157       1 reflector.go:158] "Unhandled Error" err="pkg/mod/k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError"
	I1014 08:47:28.211385   15224 command_runner.go:130] ! I1014 15:46:58.925780       1 main.go:296] Handling node with IPs: map[172.20.106.123:{}]
	I1014 08:47:28.211385   15224 command_runner.go:130] ! I1014 15:46:58.926393       1 main.go:300] handling current node
	I1014 08:47:28.211385   15224 command_runner.go:130] ! I1014 15:46:58.927562       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.211385   15224 command_runner.go:130] ! I1014 15:46:58.927665       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.211385   15224 command_runner.go:130] ! I1014 15:46:58.928645       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.20.109.137 Flags: [] Table: 0 Realm: 0} 
	I1014 08:47:28.211385   15224 command_runner.go:130] ! I1014 15:46:58.929412       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:28.211385   15224 command_runner.go:130] ! I1014 15:46:58.929466       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:28.211385   15224 command_runner.go:130] ! I1014 15:46:58.929555       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.20.102.29 Flags: [] Table: 0 Realm: 0} 
	I1014 08:47:28.211385   15224 command_runner.go:130] ! I1014 15:47:08.930440       1 main.go:296] Handling node with IPs: map[172.20.106.123:{}]
	I1014 08:47:28.211385   15224 command_runner.go:130] ! I1014 15:47:08.930586       1 main.go:300] handling current node
	I1014 08:47:28.211385   15224 command_runner.go:130] ! I1014 15:47:08.930648       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.211385   15224 command_runner.go:130] ! I1014 15:47:08.930739       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.211385   15224 command_runner.go:130] ! I1014 15:47:08.931080       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:28.211385   15224 command_runner.go:130] ! I1014 15:47:08.931268       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:28.211385   15224 command_runner.go:130] ! I1014 15:47:18.921538       1 main.go:296] Handling node with IPs: map[172.20.106.123:{}]
	I1014 08:47:28.211385   15224 command_runner.go:130] ! I1014 15:47:18.921639       1 main.go:300] handling current node
	I1014 08:47:28.211385   15224 command_runner.go:130] ! I1014 15:47:18.921689       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.211385   15224 command_runner.go:130] ! I1014 15:47:18.921698       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.211385   15224 command_runner.go:130] ! I1014 15:47:18.922117       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:28.211385   15224 command_runner.go:130] ! I1014 15:47:18.922190       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:28.214324   15224 logs.go:123] Gathering logs for kubelet ...
	I1014 08:47:28.214324   15224 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 08:47:28.245343   15224 command_runner.go:130] > Oct 14 15:46:05 multinode-671000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I1014 08:47:28.245343   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 kubelet[1480]: I1014 15:46:06.037054    1480 server.go:486] "Kubelet version" kubeletVersion="v1.31.1"
	I1014 08:47:28.245343   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 kubelet[1480]: I1014 15:46:06.037147    1480 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 08:47:28.245343   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 kubelet[1480]: I1014 15:46:06.038385    1480 server.go:929] "Client rotation is on, will bootstrap in background"
	I1014 08:47:28.245343   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 kubelet[1480]: E1014 15:46:06.039788    1480 run.go:72] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I1014 08:47:28.245343   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I1014 08:47:28.245343   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I1014 08:47:28.245343   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I1014 08:47:28.245343   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I1014 08:47:28.245343   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I1014 08:47:28.245343   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 kubelet[1540]: I1014 15:46:06.835721    1540 server.go:486] "Kubelet version" kubeletVersion="v1.31.1"
	I1014 08:47:28.245343   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 kubelet[1540]: I1014 15:46:06.835931    1540 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 08:47:28.245343   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 kubelet[1540]: I1014 15:46:06.836250    1540 server.go:929] "Client rotation is on, will bootstrap in background"
	I1014 08:47:28.245343   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 kubelet[1540]: E1014 15:46:06.836463    1540 run.go:72] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I1014 08:47:28.245343   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I1014 08:47:28.245343   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I1014 08:47:28.245343   15224 command_runner.go:130] > Oct 14 15:46:07 multinode-671000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I1014 08:47:28.245343   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I1014 08:47:28.245343   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.687712    1622 server.go:486] "Kubelet version" kubeletVersion="v1.31.1"
	I1014 08:47:28.245343   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.688474    1622 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 08:47:28.245343   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.689105    1622 server.go:929] "Client rotation is on, will bootstrap in background"
	I1014 08:47:28.246324   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.691939    1622 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I1014 08:47:28.246324   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.718455    1622 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1014 08:47:28.246324   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: E1014 15:46:09.739709    1622 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
	I1014 08:47:28.246324   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.739760    1622 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
	I1014 08:47:28.246324   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.744155    1622 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I1014 08:47:28.246324   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.744395    1622 server.go:812] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I1014 08:47:28.246324   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.744486    1622 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
	I1014 08:47:28.246324   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.744668    1622 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I1014 08:47:28.246324   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.744761    1622 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"multinode-671000","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
	I1014 08:47:28.246324   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.746378    1622 topology_manager.go:138] "Creating topology manager with none policy"
	I1014 08:47:28.246324   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.746460    1622 container_manager_linux.go:300] "Creating device plugin manager"
	I1014 08:47:28.246324   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.746633    1622 state_mem.go:36] "Initialized new in-memory state store"
	I1014 08:47:28.246324   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.749964    1622 kubelet.go:408] "Attempting to sync node with API server"
	I1014 08:47:28.246324   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.750004    1622 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
	I1014 08:47:28.246324   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.750036    1622 kubelet.go:314] "Adding apiserver pod source"
	I1014 08:47:28.246324   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.750844    1622 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I1014 08:47:28.246324   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.756693    1622 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="docker" version="27.3.1" apiVersion="v1"
	I1014 08:47:28.246324   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.763816    1622 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
	I1014 08:47:28.246324   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: W1014 15:46:09.764725    1622 probe.go:272] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I1014 08:47:28.246324   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.766925    1622 server.go:1269] "Started kubelet"
	I1014 08:47:28.246324   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: W1014 15:46:09.767088    1622 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-671000&limit=500&resourceVersion=0": dial tcp 172.20.106.123:8443: connect: connection refused
	I1014 08:47:28.246324   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: E1014 15:46:09.767172    1622 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-671000&limit=500&resourceVersion=0\": dial tcp 172.20.106.123:8443: connect: connection refused" logger="UnhandledError"
	I1014 08:47:28.246324   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: W1014 15:46:09.769189    1622 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.20.106.123:8443: connect: connection refused
	I1014 08:47:28.246324   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: E1014 15:46:09.769350    1622 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.20.106.123:8443: connect: connection refused" logger="UnhandledError"
	I1014 08:47:28.246324   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.769454    1622 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I1014 08:47:28.246324   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.770134    1622 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I1014 08:47:28.246324   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: E1014 15:46:09.772237    1622 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.20.106.123:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-671000.17fe5c47a6bff791  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-671000,UID:multinode-671000,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-671000,},FirstTimestamp:2024-10-14 15:46:09.766881169 +0000 UTC m=+0.158153075,LastTimestamp:2024-10-14 15:46:09.766881169 +0000 UTC m=+0.158153075,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-671000,}"
	I1014 08:47:28.246324   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.773096    1622 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
	I1014 08:47:28.246324   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.774576    1622 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I1014 08:47:28.246324   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.777686    1622 server.go:460] "Adding debug handlers to kubelet server"
	I1014 08:47:28.246324   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.780950    1622 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
	I1014 08:47:28.246324   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.788697    1622 volume_manager.go:289] "Starting Kubelet Volume Manager"
	I1014 08:47:28.246324   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: E1014 15:46:09.789003    1622 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"multinode-671000\" not found"
	I1014 08:47:28.246324   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.789640    1622 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
	I1014 08:47:28.246324   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.800447    1622 factory.go:221] Registration of the systemd container factory successfully
	I1014 08:47:28.246324   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.800536    1622 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I1014 08:47:28.246324   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.800587    1622 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
	I1014 08:47:28.246324   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: W1014 15:46:09.811192    1622 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.20.106.123:8443: connect: connection refused
	I1014 08:47:28.247322   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: E1014 15:46:09.811498    1622 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.20.106.123:8443: connect: connection refused" logger="UnhandledError"
	I1014 08:47:28.247322   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: E1014 15:46:09.812017    1622 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-671000?timeout=10s\": dial tcp 172.20.106.123:8443: connect: connection refused" interval="200ms"
	I1014 08:47:28.247322   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.863497    1622 cpu_manager.go:214] "Starting CPU manager" policy="none"
	I1014 08:47:28.247322   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.863530    1622 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
	I1014 08:47:28.247322   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.863554    1622 state_mem.go:36] "Initialized new in-memory state store"
	I1014 08:47:28.247322   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.868881    1622 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I1014 08:47:28.247322   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.868953    1622 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I1014 08:47:28.247322   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.868995    1622 policy_none.go:49] "None policy: Start"
	I1014 08:47:28.247322   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.872200    1622 reconciler.go:26] "Reconciler: start to sync state"
	I1014 08:47:28.247322   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.877834    1622 memory_manager.go:170] "Starting memorymanager" policy="None"
	I1014 08:47:28.247322   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.877929    1622 state_mem.go:35] "Initializing new in-memory state store"
	I1014 08:47:28.247322   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.878704    1622 state_mem.go:75] "Updated machine memory state"
	I1014 08:47:28.247322   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.884555    1622 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I1014 08:47:28.247322   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.885687    1622 eviction_manager.go:189] "Eviction manager: starting control loop"
	I1014 08:47:28.247322   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.885828    1622 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I1014 08:47:28.247322   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: E1014 15:46:09.889524    1622 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-671000\" not found"
	I1014 08:47:28.247322   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.892062    1622 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I1014 08:47:28.247322   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.900012    1622 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I1014 08:47:28.247322   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.905094    1622 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I1014 08:47:28.247322   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.906277    1622 status_manager.go:217] "Starting to sync pod status with apiserver"
	I1014 08:47:28.247322   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.906885    1622 kubelet.go:2321] "Starting kubelet main sync loop"
	I1014 08:47:28.247322   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: E1014 15:46:09.907458    1622 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
	I1014 08:47:28.247322   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: W1014 15:46:09.914061    1622 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.20.106.123:8443: connect: connection refused
	I1014 08:47:28.247322   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: E1014 15:46:09.914371    1622 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.20.106.123:8443: connect: connection refused" logger="UnhandledError"
	I1014 08:47:28.247322   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: E1014 15:46:09.933056    1622 iptables.go:577] "Could not set up iptables canary" err=<
	I1014 08:47:28.247322   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I1014 08:47:28.247322   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I1014 08:47:28.247322   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I1014 08:47:28.247322   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I1014 08:47:28.247322   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.987581    1622 kubelet_node_status.go:72] "Attempting to register node" node="multinode-671000"
	I1014 08:47:28.247322   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: E1014 15:46:09.988812    1622 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.20.106.123:8443: connect: connection refused" node="multinode-671000"
	I1014 08:47:28.247322   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.008458    1622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f8cc9a218fef4446839b0253827fc2dd89f8eccbd66ab7f0e5123c033f0aacc"
	I1014 08:47:28.247322   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: E1014 15:46:10.013887    1622 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-671000?timeout=10s\": dial tcp 172.20.106.123:8443: connect: connection refused" interval="400ms"
	I1014 08:47:28.247322   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.014354    1622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5733d27d2f1c328dbd19f6392a86e426f344b6f17c65211404fa797e84b69c9"
	I1014 08:47:28.247322   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.014436    1622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06e529266db4b2cf3ad09285cddeb89d7d11e858b5156cf73f41198c9500b9f2"
	I1014 08:47:28.247322   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.014506    1622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e48ddcfdf90ad3bfbe621f27c97a331f448947ca77dbd98ab3c9daef2c84e22"
	I1014 08:47:28.247322   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.020161    1622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2dc78387553ff4b78626f5e6aa103a40ec97f42ef49363e27d7d3698cd0df26f"
	I1014 08:47:28.247322   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.035902    1622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c6be2bd1889b6c5f021362c07c3a88f7f0ff266bb9e8ba4106d666b0f1d267d"
	I1014 08:47:28.247322   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.049024    1622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1863de70f2316e54fa61ef7c5c6aba94808669b81b1cc811dce745011ee807cb"
	I1014 08:47:28.247322   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.065264    1622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7144d8ce208cf8c176ad1fc9980a72d450a3d558c4f8f9ee453dea6b22358085"
	I1014 08:47:28.247322   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.079145    1622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bfdde08319e32b93d740933d5ab50829de8f9f3edacce92efe155b4ada4f4212"
	I1014 08:47:28.247322   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.179820    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/864b69f35cb25e9dd5d87a753a055a10-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-671000\" (UID: \"864b69f35cb25e9dd5d87a753a055a10\") " pod="kube-system/kube-apiserver-multinode-671000"
	I1014 08:47:28.247322   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.179915    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b83e31864e0ae98d29d960866012ecb0-k8s-certs\") pod \"kube-controller-manager-multinode-671000\" (UID: \"b83e31864e0ae98d29d960866012ecb0\") " pod="kube-system/kube-controller-manager-multinode-671000"
	I1014 08:47:28.248321   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.179945    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b83e31864e0ae98d29d960866012ecb0-kubeconfig\") pod \"kube-controller-manager-multinode-671000\" (UID: \"b83e31864e0ae98d29d960866012ecb0\") " pod="kube-system/kube-controller-manager-multinode-671000"
	I1014 08:47:28.248321   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.179963    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3e987cfaedc75c39145e8fc131c60c81-kubeconfig\") pod \"kube-scheduler-multinode-671000\" (UID: \"3e987cfaedc75c39145e8fc131c60c81\") " pod="kube-system/kube-scheduler-multinode-671000"
	I1014 08:47:28.248321   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.179984    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/679486a8f27de5805bc2e87fb1920dce-etcd-certs\") pod \"etcd-multinode-671000\" (UID: \"679486a8f27de5805bc2e87fb1920dce\") " pod="kube-system/etcd-multinode-671000"
	I1014 08:47:28.248321   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.180012    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/679486a8f27de5805bc2e87fb1920dce-etcd-data\") pod \"etcd-multinode-671000\" (UID: \"679486a8f27de5805bc2e87fb1920dce\") " pod="kube-system/etcd-multinode-671000"
	I1014 08:47:28.248321   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.180036    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/864b69f35cb25e9dd5d87a753a055a10-ca-certs\") pod \"kube-apiserver-multinode-671000\" (UID: \"864b69f35cb25e9dd5d87a753a055a10\") " pod="kube-system/kube-apiserver-multinode-671000"
	I1014 08:47:28.248321   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.180050    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/864b69f35cb25e9dd5d87a753a055a10-k8s-certs\") pod \"kube-apiserver-multinode-671000\" (UID: \"864b69f35cb25e9dd5d87a753a055a10\") " pod="kube-system/kube-apiserver-multinode-671000"
	I1014 08:47:28.248321   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.180068    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b83e31864e0ae98d29d960866012ecb0-ca-certs\") pod \"kube-controller-manager-multinode-671000\" (UID: \"b83e31864e0ae98d29d960866012ecb0\") " pod="kube-system/kube-controller-manager-multinode-671000"
	I1014 08:47:28.248321   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.180089    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b83e31864e0ae98d29d960866012ecb0-flexvolume-dir\") pod \"kube-controller-manager-multinode-671000\" (UID: \"b83e31864e0ae98d29d960866012ecb0\") " pod="kube-system/kube-controller-manager-multinode-671000"
	I1014 08:47:28.248321   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.180113    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b83e31864e0ae98d29d960866012ecb0-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-671000\" (UID: \"b83e31864e0ae98d29d960866012ecb0\") " pod="kube-system/kube-controller-manager-multinode-671000"
	I1014 08:47:28.248321   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.191857    1622 kubelet_node_status.go:72] "Attempting to register node" node="multinode-671000"
	I1014 08:47:28.248321   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: E1014 15:46:10.193195    1622 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.20.106.123:8443: connect: connection refused" node="multinode-671000"
	I1014 08:47:28.248321   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: E1014 15:46:10.421148    1622 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-671000?timeout=10s\": dial tcp 172.20.106.123:8443: connect: connection refused" interval="800ms"
	I1014 08:47:28.248321   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.595286    1622 kubelet_node_status.go:72] "Attempting to register node" node="multinode-671000"
	I1014 08:47:28.248321   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: E1014 15:46:10.596178    1622 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.20.106.123:8443: connect: connection refused" node="multinode-671000"
	I1014 08:47:28.248321   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: W1014 15:46:10.601172    1622 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.20.106.123:8443: connect: connection refused
	I1014 08:47:28.248321   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: E1014 15:46:10.601259    1622 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.20.106.123:8443: connect: connection refused" logger="UnhandledError"
	I1014 08:47:28.248321   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: W1014 15:46:10.913794    1622 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.20.106.123:8443: connect: connection refused
	I1014 08:47:28.248321   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: E1014 15:46:10.913870    1622 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.20.106.123:8443: connect: connection refused" logger="UnhandledError"
	I1014 08:47:28.248321   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 kubelet[1622]: W1014 15:46:11.078571    1622 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.20.106.123:8443: connect: connection refused
	I1014 08:47:28.248321   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 kubelet[1622]: E1014 15:46:11.078638    1622 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.20.106.123:8443: connect: connection refused" logger="UnhandledError"
	I1014 08:47:28.248321   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 kubelet[1622]: W1014 15:46:11.151154    1622 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-671000&limit=500&resourceVersion=0": dial tcp 172.20.106.123:8443: connect: connection refused
	I1014 08:47:28.248321   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 kubelet[1622]: E1014 15:46:11.151247    1622 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-671000&limit=500&resourceVersion=0\": dial tcp 172.20.106.123:8443: connect: connection refused" logger="UnhandledError"
	I1014 08:47:28.248321   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 kubelet[1622]: E1014 15:46:11.223425    1622 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-671000?timeout=10s\": dial tcp 172.20.106.123:8443: connect: connection refused" interval="1.6s"
	I1014 08:47:28.248321   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 kubelet[1622]: I1014 15:46:11.306759    1622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0697a11790e806fa4d679e3d97be4c7193692d1cb8f76d882cf3a75aa8e0c238"
	I1014 08:47:28.248321   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 kubelet[1622]: I1014 15:46:11.397496    1622 kubelet_node_status.go:72] "Attempting to register node" node="multinode-671000"
	I1014 08:47:28.249350   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 kubelet[1622]: E1014 15:46:11.399409    1622 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.20.106.123:8443: connect: connection refused" node="multinode-671000"
	I1014 08:47:28.249350   15224 command_runner.go:130] > Oct 14 15:46:13 multinode-671000 kubelet[1622]: I1014 15:46:13.001489    1622 kubelet_node_status.go:72] "Attempting to register node" node="multinode-671000"
	I1014 08:47:28.249350   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.316022    1622 kubelet_node_status.go:111] "Node was previously registered" node="multinode-671000"
	I1014 08:47:28.249350   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.316194    1622 kubelet_node_status.go:75] "Successfully registered node" node="multinode-671000"
	I1014 08:47:28.249350   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.316226    1622 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I1014 08:47:28.249350   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.317405    1622 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I1014 08:47:28.249350   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.318741    1622 setters.go:600] "Node became not ready" node="multinode-671000" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-10-14T15:46:15Z","lastTransitionTime":"2024-10-14T15:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I1014 08:47:28.249350   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: E1014 15:46:15.671751    1622 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"etcd-multinode-671000\" already exists" pod="kube-system/etcd-multinode-671000"
	I1014 08:47:28.249350   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.765668    1622 apiserver.go:52] "Watching apiserver"
	I1014 08:47:28.249350   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: E1014 15:46:15.771464    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:28.249350   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: E1014 15:46:15.772813    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:28.249350   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.774456    1622 kubelet.go:1895] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-671000" podUID="80ea37b8-9db1-4a39-9e9e-51c01edadfb1"
	I1014 08:47:28.249350   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.790436    1622 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	I1014 08:47:28.249350   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.804744    1622 kubelet.go:1900] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-multinode-671000"
	I1014 08:47:28.249350   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.875635    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f8d14473-8859-4015-84e9-d00656cc00c9-xtables-lock\") pod \"kube-proxy-r74dx\" (UID: \"f8d14473-8859-4015-84e9-d00656cc00c9\") " pod="kube-system/kube-proxy-r74dx"
	I1014 08:47:28.249350   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.875831    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a508bbf9-7565-4c73-98cb-e9684985c298-xtables-lock\") pod \"kindnet-wqrx6\" (UID: \"a508bbf9-7565-4c73-98cb-e9684985c298\") " pod="kube-system/kindnet-wqrx6"
	I1014 08:47:28.249350   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.876217    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f8d14473-8859-4015-84e9-d00656cc00c9-lib-modules\") pod \"kube-proxy-r74dx\" (UID: \"f8d14473-8859-4015-84e9-d00656cc00c9\") " pod="kube-system/kube-proxy-r74dx"
	I1014 08:47:28.249350   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.876424    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/fde8ff75-bc7f-4db4-b098-c3a08b38d205-tmp\") pod \"storage-provisioner\" (UID: \"fde8ff75-bc7f-4db4-b098-c3a08b38d205\") " pod="kube-system/storage-provisioner"
	I1014 08:47:28.249350   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.876537    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/a508bbf9-7565-4c73-98cb-e9684985c298-cni-cfg\") pod \"kindnet-wqrx6\" (UID: \"a508bbf9-7565-4c73-98cb-e9684985c298\") " pod="kube-system/kindnet-wqrx6"
	I1014 08:47:28.249350   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.876562    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a508bbf9-7565-4c73-98cb-e9684985c298-lib-modules\") pod \"kindnet-wqrx6\" (UID: \"a508bbf9-7565-4c73-98cb-e9684985c298\") " pod="kube-system/kindnet-wqrx6"
	I1014 08:47:28.249350   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: E1014 15:46:15.877886    1622 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I1014 08:47:28.249350   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: E1014 15:46:15.877952    1622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fd736862-9e3e-4a3d-9a86-08efd2338477-config-volume podName:fd736862-9e3e-4a3d-9a86-08efd2338477 nodeName:}" failed. No retries permitted until 2024-10-14 15:46:16.377930736 +0000 UTC m=+6.769202642 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/fd736862-9e3e-4a3d-9a86-08efd2338477-config-volume") pod "coredns-7c65d6cfc9-fs9ct" (UID: "fd736862-9e3e-4a3d-9a86-08efd2338477") : object "kube-system"/"coredns" not registered
	I1014 08:47:28.249350   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.896550    1622 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	I1014 08:47:28.249350   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: E1014 15:46:15.904462    1622 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:28.249350   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: E1014 15:46:15.904557    1622 projected.go:194] Error preparing data for projected volume kube-api-access-46k9l for pod default/busybox-7dff88458-vlp7j: object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:28.249350   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: E1014 15:46:15.904737    1622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/99068807-9f92-42f1-a1a0-fb6e533dc61a-kube-api-access-46k9l podName:99068807-9f92-42f1-a1a0-fb6e533dc61a nodeName:}" failed. No retries permitted until 2024-10-14 15:46:16.404658149 +0000 UTC m=+6.795930055 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-46k9l" (UniqueName: "kubernetes.io/projected/99068807-9f92-42f1-a1a0-fb6e533dc61a-kube-api-access-46k9l") pod "busybox-7dff88458-vlp7j" (UID: "99068807-9f92-42f1-a1a0-fb6e533dc61a") : object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:28.249350   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.919872    1622 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cf38ccc62eb74f6e658e1f66ae8cab1" path="/var/lib/kubelet/pods/3cf38ccc62eb74f6e658e1f66ae8cab1/volumes"
	I1014 08:47:28.249350   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.921055    1622 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-671000" podStartSLOduration=0.921039556 podStartE2EDuration="921.039556ms" podCreationTimestamp="2024-10-14 15:46:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-14 15:46:15.920643156 +0000 UTC m=+6.311915162" watchObservedRunningTime="2024-10-14 15:46:15.921039556 +0000 UTC m=+6.312311562"
	I1014 08:47:28.250319   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.921516    1622 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="778fdb620bffec66f911bf24e3c8210b" path="/var/lib/kubelet/pods/778fdb620bffec66f911bf24e3c8210b/volumes"
	I1014 08:47:28.250319   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 kubelet[1622]: E1014 15:46:16.380142    1622 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I1014 08:47:28.250319   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 kubelet[1622]: E1014 15:46:16.380233    1622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fd736862-9e3e-4a3d-9a86-08efd2338477-config-volume podName:fd736862-9e3e-4a3d-9a86-08efd2338477 nodeName:}" failed. No retries permitted until 2024-10-14 15:46:17.380214172 +0000 UTC m=+7.771486078 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/fd736862-9e3e-4a3d-9a86-08efd2338477-config-volume") pod "coredns-7c65d6cfc9-fs9ct" (UID: "fd736862-9e3e-4a3d-9a86-08efd2338477") : object "kube-system"/"coredns" not registered
	I1014 08:47:28.250319   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 kubelet[1622]: E1014 15:46:16.480798    1622 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:28.250319   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 kubelet[1622]: E1014 15:46:16.480831    1622 projected.go:194] Error preparing data for projected volume kube-api-access-46k9l for pod default/busybox-7dff88458-vlp7j: object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:28.250319   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 kubelet[1622]: E1014 15:46:16.480915    1622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/99068807-9f92-42f1-a1a0-fb6e533dc61a-kube-api-access-46k9l podName:99068807-9f92-42f1-a1a0-fb6e533dc61a nodeName:}" failed. No retries permitted until 2024-10-14 15:46:17.480897019 +0000 UTC m=+7.872168925 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-46k9l" (UniqueName: "kubernetes.io/projected/99068807-9f92-42f1-a1a0-fb6e533dc61a-kube-api-access-46k9l") pod "busybox-7dff88458-vlp7j" (UID: "99068807-9f92-42f1-a1a0-fb6e533dc61a") : object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:28.250319   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 kubelet[1622]: I1014 15:46:16.655226    1622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f8bdf552734e59df94d489a081585cdaeeaee729f0a0ae92105117d4d744f3f"
	I1014 08:47:28.250319   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 kubelet[1622]: I1014 15:46:16.670380    1622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cdcdd532ba1369fe0e866b321f18e0bd99a88a2699c325eea17a070d48f2f024"
	I1014 08:47:28.250319   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 kubelet[1622]: I1014 15:46:16.981444    1622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7bcadf1f0885ff3411b477a64c6af366c77eb264ce7a761f174b079b11e2e703"
	I1014 08:47:28.250319   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 kubelet[1622]: I1014 15:46:16.981500    1622 kubelet.go:1895] "Trying to delete pod" pod="kube-system/etcd-multinode-671000" podUID="56dfdf16-1224-41e3-94de-9d7f4021a17d"
	I1014 08:47:28.250319   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 kubelet[1622]: E1014 15:46:16.982831    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:28.250319   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 kubelet[1622]: I1014 15:46:17.011276    1622 kubelet.go:1900] "Deleted mirror pod because it is outdated" pod="kube-system/etcd-multinode-671000"
	I1014 08:47:28.250319   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 kubelet[1622]: E1014 15:46:17.388224    1622 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I1014 08:47:28.250319   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 kubelet[1622]: E1014 15:46:17.388370    1622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fd736862-9e3e-4a3d-9a86-08efd2338477-config-volume podName:fd736862-9e3e-4a3d-9a86-08efd2338477 nodeName:}" failed. No retries permitted until 2024-10-14 15:46:19.388351245 +0000 UTC m=+9.779623151 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/fd736862-9e3e-4a3d-9a86-08efd2338477-config-volume") pod "coredns-7c65d6cfc9-fs9ct" (UID: "fd736862-9e3e-4a3d-9a86-08efd2338477") : object "kube-system"/"coredns" not registered
	I1014 08:47:28.250319   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 kubelet[1622]: E1014 15:46:17.489591    1622 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:28.250319   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 kubelet[1622]: E1014 15:46:17.489649    1622 projected.go:194] Error preparing data for projected volume kube-api-access-46k9l for pod default/busybox-7dff88458-vlp7j: object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:28.250319   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 kubelet[1622]: E1014 15:46:17.489828    1622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/99068807-9f92-42f1-a1a0-fb6e533dc61a-kube-api-access-46k9l podName:99068807-9f92-42f1-a1a0-fb6e533dc61a nodeName:}" failed. No retries permitted until 2024-10-14 15:46:19.489808492 +0000 UTC m=+9.881080398 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-46k9l" (UniqueName: "kubernetes.io/projected/99068807-9f92-42f1-a1a0-fb6e533dc61a-kube-api-access-46k9l") pod "busybox-7dff88458-vlp7j" (UID: "99068807-9f92-42f1-a1a0-fb6e533dc61a") : object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:28.250319   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 kubelet[1622]: E1014 15:46:17.915482    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:28.250319   15224 command_runner.go:130] > Oct 14 15:46:18 multinode-671000 kubelet[1622]: I1014 15:46:18.163696    1622 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-multinode-671000" podStartSLOduration=1.163677409 podStartE2EDuration="1.163677409s" podCreationTimestamp="2024-10-14 15:46:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-14 15:46:18.133766095 +0000 UTC m=+8.525038101" watchObservedRunningTime="2024-10-14 15:46:18.163677409 +0000 UTC m=+8.554949415"
	I1014 08:47:28.250319   15224 command_runner.go:130] > Oct 14 15:46:18 multinode-671000 kubelet[1622]: E1014 15:46:18.908674    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:28.250319   15224 command_runner.go:130] > Oct 14 15:46:19 multinode-671000 kubelet[1622]: E1014 15:46:19.405477    1622 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I1014 08:47:28.250319   15224 command_runner.go:130] > Oct 14 15:46:19 multinode-671000 kubelet[1622]: E1014 15:46:19.405614    1622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fd736862-9e3e-4a3d-9a86-08efd2338477-config-volume podName:fd736862-9e3e-4a3d-9a86-08efd2338477 nodeName:}" failed. No retries permitted until 2024-10-14 15:46:23.405594191 +0000 UTC m=+13.796866097 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/fd736862-9e3e-4a3d-9a86-08efd2338477-config-volume") pod "coredns-7c65d6cfc9-fs9ct" (UID: "fd736862-9e3e-4a3d-9a86-08efd2338477") : object "kube-system"/"coredns" not registered
	I1014 08:47:28.250319   15224 command_runner.go:130] > Oct 14 15:46:19 multinode-671000 kubelet[1622]: E1014 15:46:19.506858    1622 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:28.250319   15224 command_runner.go:130] > Oct 14 15:46:19 multinode-671000 kubelet[1622]: E1014 15:46:19.507035    1622 projected.go:194] Error preparing data for projected volume kube-api-access-46k9l for pod default/busybox-7dff88458-vlp7j: object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:28.250319   15224 command_runner.go:130] > Oct 14 15:46:19 multinode-671000 kubelet[1622]: E1014 15:46:19.507122    1622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/99068807-9f92-42f1-a1a0-fb6e533dc61a-kube-api-access-46k9l podName:99068807-9f92-42f1-a1a0-fb6e533dc61a nodeName:}" failed. No retries permitted until 2024-10-14 15:46:23.507105839 +0000 UTC m=+13.898377845 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-46k9l" (UniqueName: "kubernetes.io/projected/99068807-9f92-42f1-a1a0-fb6e533dc61a-kube-api-access-46k9l") pod "busybox-7dff88458-vlp7j" (UID: "99068807-9f92-42f1-a1a0-fb6e533dc61a") : object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:28.250319   15224 command_runner.go:130] > Oct 14 15:46:19 multinode-671000 kubelet[1622]: E1014 15:46:19.931507    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:28.250319   15224 command_runner.go:130] > Oct 14 15:46:20 multinode-671000 kubelet[1622]: E1014 15:46:20.907760    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:28.250319   15224 command_runner.go:130] > Oct 14 15:46:21 multinode-671000 kubelet[1622]: E1014 15:46:21.908994    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:28.251322   15224 command_runner.go:130] > Oct 14 15:46:22 multinode-671000 kubelet[1622]: E1014 15:46:22.908657    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:28.251322   15224 command_runner.go:130] > Oct 14 15:46:23 multinode-671000 kubelet[1622]: E1014 15:46:23.462111    1622 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I1014 08:47:28.251322   15224 command_runner.go:130] > Oct 14 15:46:23 multinode-671000 kubelet[1622]: E1014 15:46:23.462203    1622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fd736862-9e3e-4a3d-9a86-08efd2338477-config-volume podName:fd736862-9e3e-4a3d-9a86-08efd2338477 nodeName:}" failed. No retries permitted until 2024-10-14 15:46:31.462185592 +0000 UTC m=+21.853457598 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/fd736862-9e3e-4a3d-9a86-08efd2338477-config-volume") pod "coredns-7c65d6cfc9-fs9ct" (UID: "fd736862-9e3e-4a3d-9a86-08efd2338477") : object "kube-system"/"coredns" not registered
	I1014 08:47:28.251322   15224 command_runner.go:130] > Oct 14 15:46:23 multinode-671000 kubelet[1622]: E1014 15:46:23.562508    1622 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:28.251322   15224 command_runner.go:130] > Oct 14 15:46:23 multinode-671000 kubelet[1622]: E1014 15:46:23.562563    1622 projected.go:194] Error preparing data for projected volume kube-api-access-46k9l for pod default/busybox-7dff88458-vlp7j: object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:28.251322   15224 command_runner.go:130] > Oct 14 15:46:23 multinode-671000 kubelet[1622]: E1014 15:46:23.562768    1622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/99068807-9f92-42f1-a1a0-fb6e533dc61a-kube-api-access-46k9l podName:99068807-9f92-42f1-a1a0-fb6e533dc61a nodeName:}" failed. No retries permitted until 2024-10-14 15:46:31.562650785 +0000 UTC m=+21.953922691 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-46k9l" (UniqueName: "kubernetes.io/projected/99068807-9f92-42f1-a1a0-fb6e533dc61a-kube-api-access-46k9l") pod "busybox-7dff88458-vlp7j" (UID: "99068807-9f92-42f1-a1a0-fb6e533dc61a") : object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:28.251322   15224 command_runner.go:130] > Oct 14 15:46:23 multinode-671000 kubelet[1622]: E1014 15:46:23.910119    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:28.251322   15224 command_runner.go:130] > Oct 14 15:46:24 multinode-671000 kubelet[1622]: E1014 15:46:24.908917    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:28.251322   15224 command_runner.go:130] > Oct 14 15:46:25 multinode-671000 kubelet[1622]: E1014 15:46:25.909505    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:28.251322   15224 command_runner.go:130] > Oct 14 15:46:26 multinode-671000 kubelet[1622]: E1014 15:46:26.907750    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:28.251322   15224 command_runner.go:130] > Oct 14 15:46:27 multinode-671000 kubelet[1622]: E1014 15:46:27.908822    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:28.251322   15224 command_runner.go:130] > Oct 14 15:46:28 multinode-671000 kubelet[1622]: E1014 15:46:28.908219    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:28.251322   15224 command_runner.go:130] > Oct 14 15:46:29 multinode-671000 kubelet[1622]: E1014 15:46:29.910218    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:28.251322   15224 command_runner.go:130] > Oct 14 15:46:30 multinode-671000 kubelet[1622]: E1014 15:46:30.908259    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:28.251322   15224 command_runner.go:130] > Oct 14 15:46:31 multinode-671000 kubelet[1622]: E1014 15:46:31.541520    1622 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I1014 08:47:28.251322   15224 command_runner.go:130] > Oct 14 15:46:31 multinode-671000 kubelet[1622]: E1014 15:46:31.541653    1622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fd736862-9e3e-4a3d-9a86-08efd2338477-config-volume podName:fd736862-9e3e-4a3d-9a86-08efd2338477 nodeName:}" failed. No retries permitted until 2024-10-14 15:46:47.541634578 +0000 UTC m=+37.932906484 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/fd736862-9e3e-4a3d-9a86-08efd2338477-config-volume") pod "coredns-7c65d6cfc9-fs9ct" (UID: "fd736862-9e3e-4a3d-9a86-08efd2338477") : object "kube-system"/"coredns" not registered
	I1014 08:47:28.251322   15224 command_runner.go:130] > Oct 14 15:46:31 multinode-671000 kubelet[1622]: E1014 15:46:31.641930    1622 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:28.251322   15224 command_runner.go:130] > Oct 14 15:46:31 multinode-671000 kubelet[1622]: E1014 15:46:31.641961    1622 projected.go:194] Error preparing data for projected volume kube-api-access-46k9l for pod default/busybox-7dff88458-vlp7j: object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:28.251322   15224 command_runner.go:130] > Oct 14 15:46:31 multinode-671000 kubelet[1622]: E1014 15:46:31.642009    1622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/99068807-9f92-42f1-a1a0-fb6e533dc61a-kube-api-access-46k9l podName:99068807-9f92-42f1-a1a0-fb6e533dc61a nodeName:}" failed. No retries permitted until 2024-10-14 15:46:47.641990935 +0000 UTC m=+38.033262841 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-46k9l" (UniqueName: "kubernetes.io/projected/99068807-9f92-42f1-a1a0-fb6e533dc61a-kube-api-access-46k9l") pod "busybox-7dff88458-vlp7j" (UID: "99068807-9f92-42f1-a1a0-fb6e533dc61a") : object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:28.251322   15224 command_runner.go:130] > Oct 14 15:46:31 multinode-671000 kubelet[1622]: E1014 15:46:31.908383    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:28.251322   15224 command_runner.go:130] > Oct 14 15:46:32 multinode-671000 kubelet[1622]: E1014 15:46:32.908527    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:28.251322   15224 command_runner.go:130] > Oct 14 15:46:33 multinode-671000 kubelet[1622]: E1014 15:46:33.910838    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:28.251322   15224 command_runner.go:130] > Oct 14 15:46:34 multinode-671000 kubelet[1622]: E1014 15:46:34.908180    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:28.251322   15224 command_runner.go:130] > Oct 14 15:46:35 multinode-671000 kubelet[1622]: E1014 15:46:35.908574    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:28.251322   15224 command_runner.go:130] > Oct 14 15:46:36 multinode-671000 kubelet[1622]: E1014 15:46:36.907722    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:28.251322   15224 command_runner.go:130] > Oct 14 15:46:37 multinode-671000 kubelet[1622]: E1014 15:46:37.907861    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:28.251322   15224 command_runner.go:130] > Oct 14 15:46:38 multinode-671000 kubelet[1622]: E1014 15:46:38.908728    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:28.251322   15224 command_runner.go:130] > Oct 14 15:46:39 multinode-671000 kubelet[1622]: E1014 15:46:39.908994    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:28.252321   15224 command_runner.go:130] > Oct 14 15:46:40 multinode-671000 kubelet[1622]: E1014 15:46:40.908676    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:28.252321   15224 command_runner.go:130] > Oct 14 15:46:41 multinode-671000 kubelet[1622]: E1014 15:46:41.909525    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:28.252321   15224 command_runner.go:130] > Oct 14 15:46:42 multinode-671000 kubelet[1622]: E1014 15:46:42.908679    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:28.252321   15224 command_runner.go:130] > Oct 14 15:46:43 multinode-671000 kubelet[1622]: E1014 15:46:43.908615    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:28.252321   15224 command_runner.go:130] > Oct 14 15:46:44 multinode-671000 kubelet[1622]: E1014 15:46:44.908884    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:28.252321   15224 command_runner.go:130] > Oct 14 15:46:45 multinode-671000 kubelet[1622]: E1014 15:46:45.908370    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:28.252321   15224 command_runner.go:130] > Oct 14 15:46:46 multinode-671000 kubelet[1622]: E1014 15:46:46.909263    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:28.252321   15224 command_runner.go:130] > Oct 14 15:46:47 multinode-671000 kubelet[1622]: E1014 15:46:47.573240    1622 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I1014 08:47:28.252321   15224 command_runner.go:130] > Oct 14 15:46:47 multinode-671000 kubelet[1622]: E1014 15:46:47.573353    1622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fd736862-9e3e-4a3d-9a86-08efd2338477-config-volume podName:fd736862-9e3e-4a3d-9a86-08efd2338477 nodeName:}" failed. No retries permitted until 2024-10-14 15:47:19.573334644 +0000 UTC m=+69.964606650 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/fd736862-9e3e-4a3d-9a86-08efd2338477-config-volume") pod "coredns-7c65d6cfc9-fs9ct" (UID: "fd736862-9e3e-4a3d-9a86-08efd2338477") : object "kube-system"/"coredns" not registered
	I1014 08:47:28.252321   15224 command_runner.go:130] > Oct 14 15:46:47 multinode-671000 kubelet[1622]: E1014 15:46:47.673810    1622 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:28.252321   15224 command_runner.go:130] > Oct 14 15:46:47 multinode-671000 kubelet[1622]: E1014 15:46:47.673907    1622 projected.go:194] Error preparing data for projected volume kube-api-access-46k9l for pod default/busybox-7dff88458-vlp7j: object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:28.252321   15224 command_runner.go:130] > Oct 14 15:46:47 multinode-671000 kubelet[1622]: E1014 15:46:47.674014    1622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/99068807-9f92-42f1-a1a0-fb6e533dc61a-kube-api-access-46k9l podName:99068807-9f92-42f1-a1a0-fb6e533dc61a nodeName:}" failed. No retries permitted until 2024-10-14 15:47:19.673994259 +0000 UTC m=+70.065266165 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-46k9l" (UniqueName: "kubernetes.io/projected/99068807-9f92-42f1-a1a0-fb6e533dc61a-kube-api-access-46k9l") pod "busybox-7dff88458-vlp7j" (UID: "99068807-9f92-42f1-a1a0-fb6e533dc61a") : object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:28.252321   15224 command_runner.go:130] > Oct 14 15:46:47 multinode-671000 kubelet[1622]: E1014 15:46:47.908883    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:28.252321   15224 command_runner.go:130] > Oct 14 15:46:48 multinode-671000 kubelet[1622]: I1014 15:46:48.486803    1622 scope.go:117] "RemoveContainer" containerID="3d8b7bae48a59c755a1ffda14e7fdd0c2302b394db67b7de21fd5b819dad243b"
	I1014 08:47:28.252321   15224 command_runner.go:130] > Oct 14 15:46:48 multinode-671000 kubelet[1622]: I1014 15:46:48.487259    1622 scope.go:117] "RemoveContainer" containerID="c76c2585681071ddc096fbc042a644b58d0b7d5d604107c6906a076c7aa1cca6"
	I1014 08:47:28.252321   15224 command_runner.go:130] > Oct 14 15:46:48 multinode-671000 kubelet[1622]: E1014 15:46:48.487448    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(fde8ff75-bc7f-4db4-b098-c3a08b38d205)\"" pod="kube-system/storage-provisioner" podUID="fde8ff75-bc7f-4db4-b098-c3a08b38d205"
	I1014 08:47:28.253338   15224 command_runner.go:130] > Oct 14 15:46:48 multinode-671000 kubelet[1622]: E1014 15:46:48.908732    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:28.253338   15224 command_runner.go:130] > Oct 14 15:46:49 multinode-671000 kubelet[1622]: E1014 15:46:49.908877    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:28.253338   15224 command_runner.go:130] > Oct 14 15:46:50 multinode-671000 kubelet[1622]: E1014 15:46:50.907718    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:28.253338   15224 command_runner.go:130] > Oct 14 15:46:51 multinode-671000 kubelet[1622]: E1014 15:46:51.909552    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:28.253338   15224 command_runner.go:130] > Oct 14 15:46:52 multinode-671000 kubelet[1622]: E1014 15:46:52.908818    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:28.253338   15224 command_runner.go:130] > Oct 14 15:46:53 multinode-671000 kubelet[1622]: E1014 15:46:53.908389    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:28.253338   15224 command_runner.go:130] > Oct 14 15:46:54 multinode-671000 kubelet[1622]: E1014 15:46:54.908089    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:28.253338   15224 command_runner.go:130] > Oct 14 15:46:55 multinode-671000 kubelet[1622]: E1014 15:46:55.908582    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:28.253338   15224 command_runner.go:130] > Oct 14 15:46:56 multinode-671000 kubelet[1622]: E1014 15:46:56.908839    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:28.253338   15224 command_runner.go:130] > Oct 14 15:46:57 multinode-671000 kubelet[1622]: E1014 15:46:57.909489    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:28.253338   15224 command_runner.go:130] > Oct 14 15:46:58 multinode-671000 kubelet[1622]: E1014 15:46:58.908804    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:28.253338   15224 command_runner.go:130] > Oct 14 15:46:59 multinode-671000 kubelet[1622]: I1014 15:46:59.853068    1622 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
	I1014 08:47:28.253338   15224 command_runner.go:130] > Oct 14 15:47:02 multinode-671000 kubelet[1622]: I1014 15:47:02.908981    1622 scope.go:117] "RemoveContainer" containerID="c76c2585681071ddc096fbc042a644b58d0b7d5d604107c6906a076c7aa1cca6"
	I1014 08:47:28.253338   15224 command_runner.go:130] > Oct 14 15:47:09 multinode-671000 kubelet[1622]: I1014 15:47:09.901385    1622 scope.go:117] "RemoveContainer" containerID="0b5a6e440d7b67606ed0a4dfa4d07715b1fd7e6f53bc0b8779f86a33c5baf6e9"
	I1014 08:47:28.253338   15224 command_runner.go:130] > Oct 14 15:47:09 multinode-671000 kubelet[1622]: I1014 15:47:09.946936    1622 scope.go:117] "RemoveContainer" containerID="1ba3cd8bbd5963097f4d674fc98eca21e1a710f5a150a067747aa4e6c922d2fe"
	I1014 08:47:28.253338   15224 command_runner.go:130] > Oct 14 15:47:09 multinode-671000 kubelet[1622]: E1014 15:47:09.949713    1622 iptables.go:577] "Could not set up iptables canary" err=<
	I1014 08:47:28.253338   15224 command_runner.go:130] > Oct 14 15:47:09 multinode-671000 kubelet[1622]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I1014 08:47:28.253338   15224 command_runner.go:130] > Oct 14 15:47:09 multinode-671000 kubelet[1622]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I1014 08:47:28.253338   15224 command_runner.go:130] > Oct 14 15:47:09 multinode-671000 kubelet[1622]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I1014 08:47:28.253338   15224 command_runner.go:130] > Oct 14 15:47:09 multinode-671000 kubelet[1622]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I1014 08:47:28.297321   15224 logs.go:123] Gathering logs for kube-apiserver [a834664fc8b8] ...
	I1014 08:47:28.297321   15224 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a834664fc8b8"
	I1014 08:47:28.323819   15224 command_runner.go:130] ! I1014 15:46:12.133612       1 options.go:228] external host was not specified, using 172.20.106.123
	I1014 08:47:28.324079   15224 command_runner.go:130] ! I1014 15:46:12.139596       1 server.go:142] Version: v1.31.1
	I1014 08:47:28.324079   15224 command_runner.go:130] ! I1014 15:46:12.140322       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 08:47:28.324079   15224 command_runner.go:130] ! I1014 15:46:13.070213       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I1014 08:47:28.324079   15224 command_runner.go:130] ! I1014 15:46:13.112422       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1014 08:47:28.324079   15224 command_runner.go:130] ! I1014 15:46:13.116622       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I1014 08:47:28.324079   15224 command_runner.go:130] ! I1014 15:46:13.116890       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1014 08:47:28.324079   15224 command_runner.go:130] ! I1014 15:46:13.117611       1 instance.go:232] Using reconciler: lease
	I1014 08:47:28.324079   15224 command_runner.go:130] ! I1014 15:46:13.606403       1 handler.go:286] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I1014 08:47:28.324079   15224 command_runner.go:130] ! W1014 15:46:13.606961       1 genericapiserver.go:765] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:28.324079   15224 command_runner.go:130] ! I1014 15:46:13.910757       1 handler.go:286] Adding GroupVersion  v1 to ResourceManager
	I1014 08:47:28.324079   15224 command_runner.go:130] ! I1014 15:46:13.911096       1 apis.go:105] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I1014 08:47:28.324079   15224 command_runner.go:130] ! I1014 15:46:14.140196       1 apis.go:105] API group "storagemigration.k8s.io" is not enabled, skipping.
	I1014 08:47:28.324079   15224 command_runner.go:130] ! I1014 15:46:14.332586       1 apis.go:105] API group "resource.k8s.io" is not enabled, skipping.
	I1014 08:47:28.324079   15224 command_runner.go:130] ! I1014 15:46:14.344695       1 handler.go:286] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I1014 08:47:28.324079   15224 command_runner.go:130] ! W1014 15:46:14.344792       1 genericapiserver.go:765] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:28.324079   15224 command_runner.go:130] ! W1014 15:46:14.344802       1 genericapiserver.go:765] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I1014 08:47:28.324079   15224 command_runner.go:130] ! I1014 15:46:14.345547       1 handler.go:286] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I1014 08:47:28.324079   15224 command_runner.go:130] ! W1014 15:46:14.345645       1 genericapiserver.go:765] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:28.324079   15224 command_runner.go:130] ! I1014 15:46:14.346729       1 handler.go:286] Adding GroupVersion autoscaling v2 to ResourceManager
	I1014 08:47:28.324079   15224 command_runner.go:130] ! I1014 15:46:14.348142       1 handler.go:286] Adding GroupVersion autoscaling v1 to ResourceManager
	I1014 08:47:28.324079   15224 command_runner.go:130] ! W1014 15:46:14.348261       1 genericapiserver.go:765] Skipping API autoscaling/v2beta1 because it has no resources.
	I1014 08:47:28.324626   15224 command_runner.go:130] ! W1014 15:46:14.348272       1 genericapiserver.go:765] Skipping API autoscaling/v2beta2 because it has no resources.
	I1014 08:47:28.324626   15224 command_runner.go:130] ! I1014 15:46:14.350632       1 handler.go:286] Adding GroupVersion batch v1 to ResourceManager
	I1014 08:47:28.324626   15224 command_runner.go:130] ! W1014 15:46:14.350741       1 genericapiserver.go:765] Skipping API batch/v1beta1 because it has no resources.
	I1014 08:47:28.324626   15224 command_runner.go:130] ! I1014 15:46:14.352378       1 handler.go:286] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I1014 08:47:28.324707   15224 command_runner.go:130] ! W1014 15:46:14.352489       1 genericapiserver.go:765] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:28.324707   15224 command_runner.go:130] ! W1014 15:46:14.352501       1 genericapiserver.go:765] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I1014 08:47:28.324707   15224 command_runner.go:130] ! I1014 15:46:14.353674       1 handler.go:286] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I1014 08:47:28.324707   15224 command_runner.go:130] ! W1014 15:46:14.353813       1 genericapiserver.go:765] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:28.324707   15224 command_runner.go:130] ! W1014 15:46:14.353843       1 genericapiserver.go:765] Skipping API coordination.k8s.io/v1alpha1 because it has no resources.
	I1014 08:47:28.324777   15224 command_runner.go:130] ! I1014 15:46:14.355117       1 handler.go:286] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I1014 08:47:28.324816   15224 command_runner.go:130] ! W1014 15:46:14.355256       1 genericapiserver.go:765] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:28.324816   15224 command_runner.go:130] ! I1014 15:46:14.358401       1 handler.go:286] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I1014 08:47:28.324857   15224 command_runner.go:130] ! W1014 15:46:14.358517       1 genericapiserver.go:765] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:28.324857   15224 command_runner.go:130] ! W1014 15:46:14.358528       1 genericapiserver.go:765] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I1014 08:47:28.324857   15224 command_runner.go:130] ! I1014 15:46:14.359534       1 handler.go:286] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I1014 08:47:28.324911   15224 command_runner.go:130] ! W1014 15:46:14.359632       1 genericapiserver.go:765] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:28.324911   15224 command_runner.go:130] ! W1014 15:46:14.359643       1 genericapiserver.go:765] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I1014 08:47:28.324947   15224 command_runner.go:130] ! I1014 15:46:14.360836       1 handler.go:286] Adding GroupVersion policy v1 to ResourceManager
	I1014 08:47:28.324972   15224 command_runner.go:130] ! W1014 15:46:14.360942       1 genericapiserver.go:765] Skipping API policy/v1beta1 because it has no resources.
	I1014 08:47:28.324972   15224 command_runner.go:130] ! I1014 15:46:14.363702       1 handler.go:286] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I1014 08:47:28.325035   15224 command_runner.go:130] ! W1014 15:46:14.363848       1 genericapiserver.go:765] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:28.325035   15224 command_runner.go:130] ! W1014 15:46:14.363860       1 genericapiserver.go:765] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I1014 08:47:28.325082   15224 command_runner.go:130] ! I1014 15:46:14.364685       1 handler.go:286] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I1014 08:47:28.325082   15224 command_runner.go:130] ! W1014 15:46:14.364801       1 genericapiserver.go:765] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:28.325082   15224 command_runner.go:130] ! W1014 15:46:14.364812       1 genericapiserver.go:765] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I1014 08:47:28.325082   15224 command_runner.go:130] ! I1014 15:46:14.368101       1 handler.go:286] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I1014 08:47:28.325082   15224 command_runner.go:130] ! W1014 15:46:14.368216       1 genericapiserver.go:765] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:28.325082   15224 command_runner.go:130] ! W1014 15:46:14.368228       1 genericapiserver.go:765] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I1014 08:47:28.325082   15224 command_runner.go:130] ! I1014 15:46:14.370008       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1 to ResourceManager
	I1014 08:47:28.325082   15224 command_runner.go:130] ! I1014 15:46:14.371702       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
	I1014 08:47:28.325082   15224 command_runner.go:130] ! W1014 15:46:14.371808       1 genericapiserver.go:765] Skipping API flowcontrol.apiserver.k8s.io/v1beta2 because it has no resources.
	I1014 08:47:28.325082   15224 command_runner.go:130] ! W1014 15:46:14.371818       1 genericapiserver.go:765] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:28.325082   15224 command_runner.go:130] ! I1014 15:46:14.376771       1 handler.go:286] Adding GroupVersion apps v1 to ResourceManager
	I1014 08:47:28.325082   15224 command_runner.go:130] ! W1014 15:46:14.376868       1 genericapiserver.go:765] Skipping API apps/v1beta2 because it has no resources.
	I1014 08:47:28.325082   15224 command_runner.go:130] ! W1014 15:46:14.376877       1 genericapiserver.go:765] Skipping API apps/v1beta1 because it has no resources.
	I1014 08:47:28.325082   15224 command_runner.go:130] ! I1014 15:46:14.379998       1 handler.go:286] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I1014 08:47:28.325082   15224 command_runner.go:130] ! W1014 15:46:14.380101       1 genericapiserver.go:765] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:28.325082   15224 command_runner.go:130] ! W1014 15:46:14.380112       1 genericapiserver.go:765] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I1014 08:47:28.325082   15224 command_runner.go:130] ! I1014 15:46:14.380956       1 handler.go:286] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I1014 08:47:28.325082   15224 command_runner.go:130] ! W1014 15:46:14.381059       1 genericapiserver.go:765] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:28.325082   15224 command_runner.go:130] ! I1014 15:46:14.395072       1 handler.go:286] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I1014 08:47:28.325082   15224 command_runner.go:130] ! W1014 15:46:14.395116       1 genericapiserver.go:765] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:28.325082   15224 command_runner.go:130] ! I1014 15:46:15.014537       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1014 08:47:28.325082   15224 command_runner.go:130] ! I1014 15:46:15.014702       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1014 08:47:28.325082   15224 command_runner.go:130] ! I1014 15:46:15.016123       1 dynamic_serving_content.go:135] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I1014 08:47:28.325082   15224 command_runner.go:130] ! I1014 15:46:15.016823       1 secure_serving.go:213] Serving securely on [::]:8443
	I1014 08:47:28.325082   15224 command_runner.go:130] ! I1014 15:46:15.017426       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1014 08:47:28.325082   15224 command_runner.go:130] ! I1014 15:46:15.018450       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I1014 08:47:28.325082   15224 command_runner.go:130] ! I1014 15:46:15.018766       1 remote_available_controller.go:411] Starting RemoteAvailability controller
	I1014 08:47:28.325082   15224 command_runner.go:130] ! I1014 15:46:15.018850       1 cache.go:32] Waiting for caches to sync for RemoteAvailability controller
	I1014 08:47:28.325082   15224 command_runner.go:130] ! I1014 15:46:15.018985       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I1014 08:47:28.325082   15224 command_runner.go:130] ! I1014 15:46:15.021391       1 controller.go:119] Starting legacy_token_tracking_controller
	I1014 08:47:28.325620   15224 command_runner.go:130] ! I1014 15:46:15.021471       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I1014 08:47:28.325620   15224 command_runner.go:130] ! I1014 15:46:15.021517       1 aggregator.go:169] waiting for initial CRD sync...
	I1014 08:47:28.325684   15224 command_runner.go:130] ! I1014 15:46:15.022050       1 dynamic_serving_content.go:135] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I1014 08:47:28.325684   15224 command_runner.go:130] ! I1014 15:46:15.022573       1 cluster_authentication_trust_controller.go:443] Starting cluster_authentication_trust_controller controller
	I1014 08:47:28.325756   15224 command_runner.go:130] ! I1014 15:46:15.022688       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
	I1014 08:47:28.325756   15224 command_runner.go:130] ! I1014 15:46:15.022775       1 system_namespaces_controller.go:66] Starting system namespaces controller
	I1014 08:47:28.325787   15224 command_runner.go:130] ! I1014 15:46:15.026778       1 apf_controller.go:377] Starting API Priority and Fairness config controller
	I1014 08:47:28.325787   15224 command_runner.go:130] ! I1014 15:46:15.027043       1 controller.go:78] Starting OpenAPI AggregationController
	I1014 08:47:28.325787   15224 command_runner.go:130] ! I1014 15:46:15.027942       1 customresource_discovery_controller.go:292] Starting DiscoveryController
	I1014 08:47:28.325787   15224 command_runner.go:130] ! I1014 15:46:15.029402       1 local_available_controller.go:156] Starting LocalAvailability controller
	I1014 08:47:28.325787   15224 command_runner.go:130] ! I1014 15:46:15.029447       1 cache.go:32] Waiting for caches to sync for LocalAvailability controller
	I1014 08:47:28.325787   15224 command_runner.go:130] ! I1014 15:46:15.029815       1 apiservice_controller.go:100] Starting APIServiceRegistrationController
	I1014 08:47:28.325787   15224 command_runner.go:130] ! I1014 15:46:15.029850       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I1014 08:47:28.325862   15224 command_runner.go:130] ! I1014 15:46:15.034040       1 crdregistration_controller.go:114] Starting crd-autoregister controller
	I1014 08:47:28.325862   15224 command_runner.go:130] ! I1014 15:46:15.034136       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I1014 08:47:28.325862   15224 command_runner.go:130] ! I1014 15:46:15.034690       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1014 08:47:28.325862   15224 command_runner.go:130] ! I1014 15:46:15.034946       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1014 08:47:28.325862   15224 command_runner.go:130] ! I1014 15:46:15.082229       1 controller.go:142] Starting OpenAPI controller
	I1014 08:47:28.325940   15224 command_runner.go:130] ! I1014 15:46:15.083838       1 controller.go:90] Starting OpenAPI V3 controller
	I1014 08:47:28.325940   15224 command_runner.go:130] ! I1014 15:46:15.083894       1 naming_controller.go:294] Starting NamingConditionController
	I1014 08:47:28.325940   15224 command_runner.go:130] ! I1014 15:46:15.086443       1 establishing_controller.go:81] Starting EstablishingController
	I1014 08:47:28.325940   15224 command_runner.go:130] ! I1014 15:46:15.087455       1 nonstructuralschema_controller.go:195] Starting NonStructuralSchemaConditionController
	I1014 08:47:28.325940   15224 command_runner.go:130] ! I1014 15:46:15.088333       1 apiapproval_controller.go:189] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I1014 08:47:28.325940   15224 command_runner.go:130] ! I1014 15:46:15.092677       1 crd_finalizer.go:269] Starting CRDFinalizer
	I1014 08:47:28.325940   15224 command_runner.go:130] ! I1014 15:46:15.212597       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1014 08:47:28.325940   15224 command_runner.go:130] ! I1014 15:46:15.212691       1 policy_source.go:224] refreshing policies
	I1014 08:47:28.326042   15224 command_runner.go:130] ! I1014 15:46:15.221529       1 shared_informer.go:320] Caches are synced for configmaps
	I1014 08:47:28.326042   15224 command_runner.go:130] ! I1014 15:46:15.226910       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1014 08:47:28.326042   15224 command_runner.go:130] ! I1014 15:46:15.227013       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1014 08:47:28.326042   15224 command_runner.go:130] ! I1014 15:46:15.229937       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1014 08:47:28.326042   15224 command_runner.go:130] ! I1014 15:46:15.231898       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1014 08:47:28.326122   15224 command_runner.go:130] ! I1014 15:46:15.233234       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1014 08:47:28.326122   15224 command_runner.go:130] ! I1014 15:46:15.234375       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1014 08:47:28.326122   15224 command_runner.go:130] ! I1014 15:46:15.235151       1 aggregator.go:171] initial CRD sync complete...
	I1014 08:47:28.326228   15224 command_runner.go:130] ! I1014 15:46:15.235400       1 autoregister_controller.go:144] Starting autoregister controller
	I1014 08:47:28.326256   15224 command_runner.go:130] ! I1014 15:46:15.235712       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1014 08:47:28.326288   15224 command_runner.go:130] ! I1014 15:46:15.235936       1 cache.go:39] Caches are synced for autoregister controller
	I1014 08:47:28.326288   15224 command_runner.go:130] ! I1014 15:46:15.255261       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1014 08:47:28.326288   15224 command_runner.go:130] ! I1014 15:46:15.256039       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I1014 08:47:28.326288   15224 command_runner.go:130] ! I1014 15:46:15.271561       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1014 08:47:28.326288   15224 command_runner.go:130] ! I1014 15:46:15.319091       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1014 08:47:28.326288   15224 command_runner.go:130] ! I1014 15:46:16.036564       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1014 08:47:28.326288   15224 command_runner.go:130] ! W1014 15:46:16.558489       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.20.100.167 172.20.106.123]
	I1014 08:47:28.326288   15224 command_runner.go:130] ! I1014 15:46:16.560272       1 controller.go:615] quota admission added evaluator for: endpoints
	I1014 08:47:28.326288   15224 command_runner.go:130] ! I1014 15:46:16.573015       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1014 08:47:28.326288   15224 command_runner.go:130] ! I1014 15:46:18.229365       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1014 08:47:28.326288   15224 command_runner.go:130] ! I1014 15:46:18.748102       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1014 08:47:28.326288   15224 command_runner.go:130] ! I1014 15:46:18.793266       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1014 08:47:28.326288   15224 command_runner.go:130] ! I1014 15:46:18.985788       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1014 08:47:28.326288   15224 command_runner.go:130] ! I1014 15:46:19.024530       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1014 08:47:28.326288   15224 command_runner.go:130] ! W1014 15:46:36.563040       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.20.106.123]
	I1014 08:47:28.336942   15224 logs.go:123] Gathering logs for kube-scheduler [661e75bbf6b4] ...
	I1014 08:47:28.336942   15224 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 661e75bbf6b4"
	I1014 08:47:28.369939   15224 command_runner.go:130] ! I1014 15:22:34.688194       1 serving.go:386] Generated self-signed cert in-memory
	I1014 08:47:28.370673   15224 command_runner.go:130] ! W1014 15:22:36.199586       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I1014 08:47:28.370788   15224 command_runner.go:130] ! W1014 15:22:36.199661       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I1014 08:47:28.370788   15224 command_runner.go:130] ! W1014 15:22:36.199675       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	I1014 08:47:28.370788   15224 command_runner.go:130] ! W1014 15:22:36.199681       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1014 08:47:28.371646   15224 command_runner.go:130] ! I1014 15:22:36.288536       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I1014 08:47:28.371646   15224 command_runner.go:130] ! I1014 15:22:36.288649       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 08:47:28.371767   15224 command_runner.go:130] ! I1014 15:22:36.292628       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1014 08:47:28.371767   15224 command_runner.go:130] ! I1014 15:22:36.292942       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1014 08:47:28.371767   15224 command_runner.go:130] ! I1014 15:22:36.293038       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1014 08:47:28.371767   15224 command_runner.go:130] ! I1014 15:22:36.293102       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1014 08:47:28.371845   15224 command_runner.go:130] ! W1014 15:22:36.298034       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I1014 08:47:28.371896   15224 command_runner.go:130] ! E1014 15:22:36.298090       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:28.371953   15224 command_runner.go:130] ! W1014 15:22:36.298377       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I1014 08:47:28.372052   15224 command_runner.go:130] ! E1014 15:22:36.298420       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:28.372152   15224 command_runner.go:130] ! W1014 15:22:36.298587       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I1014 08:47:28.372181   15224 command_runner.go:130] ! E1014 15:22:36.298642       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:28.372181   15224 command_runner.go:130] ! W1014 15:22:36.298730       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I1014 08:47:28.372181   15224 command_runner.go:130] ! E1014 15:22:36.298855       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:28.372181   15224 command_runner.go:130] ! W1014 15:22:36.299272       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I1014 08:47:28.372181   15224 command_runner.go:130] ! E1014 15:22:36.299314       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:28.372181   15224 command_runner.go:130] ! W1014 15:22:36.299416       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I1014 08:47:28.372181   15224 command_runner.go:130] ! E1014 15:22:36.299618       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:28.372181   15224 command_runner.go:130] ! W1014 15:22:36.299693       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I1014 08:47:28.372181   15224 command_runner.go:130] ! E1014 15:22:36.299710       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:28.372181   15224 command_runner.go:130] ! W1014 15:22:36.299857       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I1014 08:47:28.372181   15224 command_runner.go:130] ! E1014 15:22:36.299920       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:28.372181   15224 command_runner.go:130] ! W1014 15:22:36.302822       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1014 08:47:28.372181   15224 command_runner.go:130] ! E1014 15:22:36.303096       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1014 08:47:28.372181   15224 command_runner.go:130] ! W1014 15:22:36.303242       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I1014 08:47:28.372181   15224 command_runner.go:130] ! E1014 15:22:36.303288       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:28.372181   15224 command_runner.go:130] ! W1014 15:22:36.303391       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I1014 08:47:28.372940   15224 command_runner.go:130] ! E1014 15:22:36.303426       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:28.372940   15224 command_runner.go:130] ! W1014 15:22:36.303605       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I1014 08:47:28.372940   15224 command_runner.go:130] ! E1014 15:22:36.303643       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:28.372940   15224 command_runner.go:130] ! W1014 15:22:36.303739       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I1014 08:47:28.372940   15224 command_runner.go:130] ! E1014 15:22:36.303771       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:28.372940   15224 command_runner.go:130] ! W1014 15:22:36.303825       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I1014 08:47:28.372940   15224 command_runner.go:130] ! E1014 15:22:36.303860       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:28.372940   15224 command_runner.go:130] ! W1014 15:22:36.304041       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I1014 08:47:28.372940   15224 command_runner.go:130] ! E1014 15:22:36.304079       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:28.372940   15224 command_runner.go:130] ! W1014 15:22:37.145637       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I1014 08:47:28.372940   15224 command_runner.go:130] ! E1014 15:22:37.146051       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:28.372940   15224 command_runner.go:130] ! W1014 15:22:37.146415       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I1014 08:47:28.372940   15224 command_runner.go:130] ! E1014 15:22:37.146705       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:28.372940   15224 command_runner.go:130] ! W1014 15:22:37.189116       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I1014 08:47:28.373579   15224 command_runner.go:130] ! E1014 15:22:37.189252       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:28.373579   15224 command_runner.go:130] ! W1014 15:22:37.205810       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I1014 08:47:28.373579   15224 command_runner.go:130] ! E1014 15:22:37.206152       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:28.373579   15224 command_runner.go:130] ! W1014 15:22:37.269786       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I1014 08:47:28.373579   15224 command_runner.go:130] ! E1014 15:22:37.269856       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:28.373579   15224 command_runner.go:130] ! W1014 15:22:37.270258       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I1014 08:47:28.373579   15224 command_runner.go:130] ! E1014 15:22:37.270286       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:28.373579   15224 command_runner.go:130] ! W1014 15:22:37.290857       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I1014 08:47:28.373579   15224 command_runner.go:130] ! E1014 15:22:37.290997       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:28.373579   15224 command_runner.go:130] ! W1014 15:22:37.304519       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1014 08:47:28.373579   15224 command_runner.go:130] ! E1014 15:22:37.305020       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1014 08:47:28.373579   15224 command_runner.go:130] ! W1014 15:22:37.328746       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I1014 08:47:28.373579   15224 command_runner.go:130] ! E1014 15:22:37.329049       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:28.374141   15224 command_runner.go:130] ! W1014 15:22:37.338059       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I1014 08:47:28.374224   15224 command_runner.go:130] ! E1014 15:22:37.338269       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:28.374321   15224 command_runner.go:130] ! W1014 15:22:37.394728       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I1014 08:47:28.374349   15224 command_runner.go:130] ! E1014 15:22:37.394803       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:28.374382   15224 command_runner.go:130] ! W1014 15:22:37.445455       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I1014 08:47:28.374382   15224 command_runner.go:130] ! E1014 15:22:37.445612       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:28.374382   15224 command_runner.go:130] ! W1014 15:22:37.490285       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I1014 08:47:28.374382   15224 command_runner.go:130] ! E1014 15:22:37.490817       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:28.374382   15224 command_runner.go:130] ! W1014 15:22:37.537304       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I1014 08:47:28.374382   15224 command_runner.go:130] ! E1014 15:22:37.537396       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:28.374382   15224 command_runner.go:130] ! W1014 15:22:37.713713       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I1014 08:47:28.374382   15224 command_runner.go:130] ! E1014 15:22:37.713777       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:28.374382   15224 command_runner.go:130] ! I1014 15:22:39.593596       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1014 08:47:28.374382   15224 command_runner.go:130] ! I1014 15:43:46.388691       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I1014 08:47:28.374382   15224 command_runner.go:130] ! I1014 15:43:46.388783       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1014 08:47:28.374382   15224 command_runner.go:130] ! I1014 15:43:46.389141       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1014 08:47:28.374382   15224 command_runner.go:130] ! E1014 15:43:46.389549       1 run.go:72] "command failed" err="finished without leader elect"
	I1014 08:47:28.389055   15224 logs.go:123] Gathering logs for container status ...
	I1014 08:47:28.389055   15224 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 08:47:28.450064   15224 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I1014 08:47:28.450064   15224 command_runner.go:130] > 1adddc667bd90       8c811b4aec35f                                                                                         8 seconds ago        Running             busybox                   1                   9092d17516eb3       busybox-7dff88458-vlp7j
	I1014 08:47:28.450064   15224 command_runner.go:130] > 5d223e2e64fcd       c69fa2e9cbf5f                                                                                         8 seconds ago        Running             coredns                   1                   429b989a1a986       coredns-7c65d6cfc9-fs9ct
	I1014 08:47:28.450064   15224 command_runner.go:130] > 9d526b02ee41c       6e38f40d628db                                                                                         26 seconds ago       Running             storage-provisioner       2                   cdcdd532ba136       storage-provisioner
	I1014 08:47:28.450064   15224 command_runner.go:130] > bba035362eb97       3a5bc24055c9e                                                                                         About a minute ago   Running             kindnet-cni               1                   7bcadf1f0885f       kindnet-wqrx6
	I1014 08:47:28.450064   15224 command_runner.go:130] > c76c258568107       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   cdcdd532ba136       storage-provisioner
	I1014 08:47:28.450064   15224 command_runner.go:130] > e83db276dec37       60c005f310ff3                                                                                         About a minute ago   Running             kube-proxy                1                   6f8bdf552734e       kube-proxy-r74dx
	I1014 08:47:28.450064   15224 command_runner.go:130] > 48c8492e231e1       2e96e5913fc06                                                                                         About a minute ago   Running             etcd                      0                   0697a11790e80       etcd-multinode-671000
	I1014 08:47:28.450064   15224 command_runner.go:130] > 8af48c446f7e1       175ffd71cce3d                                                                                         About a minute ago   Running             kube-controller-manager   1                   7bd4c36606eef       kube-controller-manager-multinode-671000
	I1014 08:47:28.450064   15224 command_runner.go:130] > a834664fc8b80       6bab7719df100                                                                                         About a minute ago   Running             kube-apiserver            0                   6155e8be2d5d7       kube-apiserver-multinode-671000
	I1014 08:47:28.450064   15224 command_runner.go:130] > d428685276e1e       9aa1fad941575                                                                                         About a minute ago   Running             kube-scheduler            1                   1d3033f871fb1       kube-scheduler-multinode-671000
	I1014 08:47:28.450064   15224 command_runner.go:130] > cbf0b40e378b9       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   20 minutes ago       Exited              busybox                   0                   06e529266db4b       busybox-7dff88458-vlp7j
	I1014 08:47:28.450064   15224 command_runner.go:130] > d9831e9f8ce8c       c69fa2e9cbf5f                                                                                         24 minutes ago       Exited              coredns                   0                   2f8cc9a218fef       coredns-7c65d6cfc9-fs9ct
	I1014 08:47:28.450064   15224 command_runner.go:130] > fcdf89a3ac8ce       kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387              24 minutes ago       Exited              kindnet-cni               0                   5e48ddcfdf90a       kindnet-wqrx6
	I1014 08:47:28.450064   15224 command_runner.go:130] > ea19428d70363       60c005f310ff3                                                                                         24 minutes ago       Exited              kube-proxy                0                   7144d8ce208cf       kube-proxy-r74dx
	I1014 08:47:28.450064   15224 command_runner.go:130] > 661e75bbf6b46       9aa1fad941575                                                                                         24 minutes ago       Exited              kube-scheduler            0                   2dc78387553ff       kube-scheduler-multinode-671000
	I1014 08:47:28.450064   15224 command_runner.go:130] > 712aad669c9f6       175ffd71cce3d                                                                                         24 minutes ago       Exited              kube-controller-manager   0                   bfdde08319e32       kube-controller-manager-multinode-671000
	I1014 08:47:28.453051   15224 logs.go:123] Gathering logs for describe nodes ...
	I1014 08:47:28.453051   15224 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 08:47:28.601782   15224 command_runner.go:130] > Name:               multinode-671000
	I1014 08:47:28.601867   15224 command_runner.go:130] > Roles:              control-plane
	I1014 08:47:28.601867   15224 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I1014 08:47:28.601867   15224 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I1014 08:47:28.601867   15224 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I1014 08:47:28.601969   15224 command_runner.go:130] >                     kubernetes.io/hostname=multinode-671000
	I1014 08:47:28.601969   15224 command_runner.go:130] >                     kubernetes.io/os=linux
	I1014 08:47:28.601969   15224 command_runner.go:130] >                     minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b
	I1014 08:47:28.601969   15224 command_runner.go:130] >                     minikube.k8s.io/name=multinode-671000
	I1014 08:47:28.601969   15224 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I1014 08:47:28.601969   15224 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_10_14T08_22_41_0700
	I1014 08:47:28.601969   15224 command_runner.go:130] >                     minikube.k8s.io/version=v1.34.0
	I1014 08:47:28.602042   15224 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I1014 08:47:28.602042   15224 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I1014 08:47:28.602042   15224 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I1014 08:47:28.602042   15224 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I1014 08:47:28.602042   15224 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I1014 08:47:28.602108   15224 command_runner.go:130] > CreationTimestamp:  Mon, 14 Oct 2024 15:22:36 +0000
	I1014 08:47:28.602137   15224 command_runner.go:130] > Taints:             <none>
	I1014 08:47:28.602137   15224 command_runner.go:130] > Unschedulable:      false
	I1014 08:47:28.602137   15224 command_runner.go:130] > Lease:
	I1014 08:47:28.602171   15224 command_runner.go:130] >   HolderIdentity:  multinode-671000
	I1014 08:47:28.602188   15224 command_runner.go:130] >   AcquireTime:     <unset>
	I1014 08:47:28.602219   15224 command_runner.go:130] >   RenewTime:       Mon, 14 Oct 2024 15:47:26 +0000
	I1014 08:47:28.602219   15224 command_runner.go:130] > Conditions:
	I1014 08:47:28.602267   15224 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I1014 08:47:28.602313   15224 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I1014 08:47:28.602313   15224 command_runner.go:130] >   MemoryPressure   False   Mon, 14 Oct 2024 15:46:59 +0000   Mon, 14 Oct 2024 15:22:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I1014 08:47:28.602357   15224 command_runner.go:130] >   DiskPressure     False   Mon, 14 Oct 2024 15:46:59 +0000   Mon, 14 Oct 2024 15:22:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I1014 08:47:28.602357   15224 command_runner.go:130] >   PIDPressure      False   Mon, 14 Oct 2024 15:46:59 +0000   Mon, 14 Oct 2024 15:22:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I1014 08:47:28.602433   15224 command_runner.go:130] >   Ready            True    Mon, 14 Oct 2024 15:46:59 +0000   Mon, 14 Oct 2024 15:46:59 +0000   KubeletReady                 kubelet is posting ready status
	I1014 08:47:28.602433   15224 command_runner.go:130] > Addresses:
	I1014 08:47:28.602433   15224 command_runner.go:130] >   InternalIP:  172.20.106.123
	I1014 08:47:28.602433   15224 command_runner.go:130] >   Hostname:    multinode-671000
	I1014 08:47:28.602433   15224 command_runner.go:130] > Capacity:
	I1014 08:47:28.602433   15224 command_runner.go:130] >   cpu:                2
	I1014 08:47:28.602433   15224 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I1014 08:47:28.602433   15224 command_runner.go:130] >   hugepages-2Mi:      0
	I1014 08:47:28.602433   15224 command_runner.go:130] >   memory:             2164264Ki
	I1014 08:47:28.602433   15224 command_runner.go:130] >   pods:               110
	I1014 08:47:28.602433   15224 command_runner.go:130] > Allocatable:
	I1014 08:47:28.602433   15224 command_runner.go:130] >   cpu:                2
	I1014 08:47:28.602433   15224 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I1014 08:47:28.602433   15224 command_runner.go:130] >   hugepages-2Mi:      0
	I1014 08:47:28.602433   15224 command_runner.go:130] >   memory:             2164264Ki
	I1014 08:47:28.602433   15224 command_runner.go:130] >   pods:               110
	I1014 08:47:28.602433   15224 command_runner.go:130] > System Info:
	I1014 08:47:28.602433   15224 command_runner.go:130] >   Machine ID:                 fc389f3b9e2846b4b909cfc8e7984541
	I1014 08:47:28.602433   15224 command_runner.go:130] >   System UUID:                f72bc210-2ad8-1e4f-ad7d-3e75159c3d98
	I1014 08:47:28.602433   15224 command_runner.go:130] >   Boot ID:                    98d09a99-1eff-402d-837f-6cacdc4463d7
	I1014 08:47:28.602433   15224 command_runner.go:130] >   Kernel Version:             5.10.207
	I1014 08:47:28.602433   15224 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I1014 08:47:28.602433   15224 command_runner.go:130] >   Operating System:           linux
	I1014 08:47:28.602433   15224 command_runner.go:130] >   Architecture:               amd64
	I1014 08:47:28.602433   15224 command_runner.go:130] >   Container Runtime Version:  docker://27.3.1
	I1014 08:47:28.602433   15224 command_runner.go:130] >   Kubelet Version:            v1.31.1
	I1014 08:47:28.602433   15224 command_runner.go:130] >   Kube-Proxy Version:         v1.31.1
	I1014 08:47:28.602433   15224 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I1014 08:47:28.602433   15224 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I1014 08:47:28.602433   15224 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I1014 08:47:28.602433   15224 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I1014 08:47:28.602433   15224 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I1014 08:47:28.602433   15224 command_runner.go:130] >   default                     busybox-7dff88458-vlp7j                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I1014 08:47:28.602433   15224 command_runner.go:130] >   kube-system                 coredns-7c65d6cfc9-fs9ct                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     24m
	I1014 08:47:28.602433   15224 command_runner.go:130] >   kube-system                 etcd-multinode-671000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         71s
	I1014 08:47:28.602433   15224 command_runner.go:130] >   kube-system                 kindnet-wqrx6                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      24m
	I1014 08:47:28.602433   15224 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-671000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         73s
	I1014 08:47:28.602433   15224 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-671000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         24m
	I1014 08:47:28.602433   15224 command_runner.go:130] >   kube-system                 kube-proxy-r74dx                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	I1014 08:47:28.602433   15224 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-671000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24m
	I1014 08:47:28.602433   15224 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	I1014 08:47:28.602972   15224 command_runner.go:130] > Allocated resources:
	I1014 08:47:28.602972   15224 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I1014 08:47:28.602972   15224 command_runner.go:130] >   Resource           Requests     Limits
	I1014 08:47:28.602972   15224 command_runner.go:130] >   --------           --------     ------
	I1014 08:47:28.602972   15224 command_runner.go:130] >   cpu                850m (42%)   100m (5%)
	I1014 08:47:28.603092   15224 command_runner.go:130] >   memory             220Mi (10%)  220Mi (10%)
	I1014 08:47:28.603092   15224 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I1014 08:47:28.603092   15224 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I1014 08:47:28.603092   15224 command_runner.go:130] > Events:
	I1014 08:47:28.603092   15224 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I1014 08:47:28.603092   15224 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I1014 08:47:28.603154   15224 command_runner.go:130] >   Normal  Starting                 24m                kube-proxy       
	I1014 08:47:28.603154   15224 command_runner.go:130] >   Normal  Starting                 70s                kube-proxy       
	I1014 08:47:28.603154   15224 command_runner.go:130] >   Normal  Starting                 24m                kubelet          Starting kubelet.
	I1014 08:47:28.603154   15224 command_runner.go:130] >   Normal  NodeHasSufficientMemory  24m (x8 over 24m)  kubelet          Node multinode-671000 status is now: NodeHasSufficientMemory
	I1014 08:47:28.603218   15224 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    24m (x8 over 24m)  kubelet          Node multinode-671000 status is now: NodeHasNoDiskPressure
	I1014 08:47:28.603242   15224 command_runner.go:130] >   Normal  NodeHasSufficientPID     24m (x7 over 24m)  kubelet          Node multinode-671000 status is now: NodeHasSufficientPID
	I1014 08:47:28.603271   15224 command_runner.go:130] >   Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	I1014 08:47:28.603271   15224 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    24m                kubelet          Node multinode-671000 status is now: NodeHasNoDiskPressure
	I1014 08:47:28.603271   15224 command_runner.go:130] >   Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	I1014 08:47:28.603271   15224 command_runner.go:130] >   Normal  NodeHasSufficientMemory  24m                kubelet          Node multinode-671000 status is now: NodeHasSufficientMemory
	I1014 08:47:28.603271   15224 command_runner.go:130] >   Normal  NodeHasSufficientPID     24m                kubelet          Node multinode-671000 status is now: NodeHasSufficientPID
	I1014 08:47:28.603271   15224 command_runner.go:130] >   Normal  Starting                 24m                kubelet          Starting kubelet.
	I1014 08:47:28.603271   15224 command_runner.go:130] >   Normal  RegisteredNode           24m                node-controller  Node multinode-671000 event: Registered Node multinode-671000 in Controller
	I1014 08:47:28.603271   15224 command_runner.go:130] >   Normal  NodeReady                24m                kubelet          Node multinode-671000 status is now: NodeReady
	I1014 08:47:28.603271   15224 command_runner.go:130] >   Normal  Starting                 79s                kubelet          Starting kubelet.
	I1014 08:47:28.603271   15224 command_runner.go:130] >   Normal  NodeAllocatableEnforced  79s                kubelet          Updated Node Allocatable limit across pods
	I1014 08:47:28.603271   15224 command_runner.go:130] >   Normal  NodeHasSufficientMemory  78s (x8 over 79s)  kubelet          Node multinode-671000 status is now: NodeHasSufficientMemory
	I1014 08:47:28.603271   15224 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    78s (x8 over 79s)  kubelet          Node multinode-671000 status is now: NodeHasNoDiskPressure
	I1014 08:47:28.603271   15224 command_runner.go:130] >   Normal  NodeHasSufficientPID     78s (x7 over 79s)  kubelet          Node multinode-671000 status is now: NodeHasSufficientPID
	I1014 08:47:28.603271   15224 command_runner.go:130] >   Normal  RegisteredNode           70s                node-controller  Node multinode-671000 event: Registered Node multinode-671000 in Controller
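The Allocated resources table above (850m CPU requested of 2 allocatable cores, 42%) is derived by kubectl from the requests of the non-terminated pods listed just before it, and the two interleaved generations of events (24m-old and ~79s-old) show the kubelet on multinode-671000 was restarted partway through the test. The same request figure can be re-derived with client-go; the following is a minimal sketch, assuming a default kubeconfig, reusing the node name from this report, and ignoring init containers for brevity:

// cpu_requests.go: re-derive a node's "Allocated resources" CPU figure.
// Illustrative sketch only; kubeconfig path and node name are assumptions.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	const node = "multinode-671000" // node under inspection
	pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
		metav1.ListOptions{FieldSelector: "spec.nodeName=" + node})
	if err != nil {
		panic(err)
	}

	var requests int64 // millicores
	for _, p := range pods.Items {
		if p.Status.Phase == corev1.PodSucceeded || p.Status.Phase == corev1.PodFailed {
			continue // kubectl describe counts only non-terminated pods
		}
		for _, c := range p.Spec.Containers {
			if q, ok := c.Resources.Requests[corev1.ResourceCPU]; ok {
				requests += q.MilliValue()
			}
		}
	}

	n, err := cs.CoreV1().Nodes().Get(context.TODO(), node, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	alloc := n.Status.Allocatable.Cpu().MilliValue()
	fmt.Printf("cpu requests: %dm of %dm allocatable (%d%%)\n",
		requests, alloc, requests*100/alloc)
}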
	I1014 08:47:28.626872   15224 command_runner.go:130] > Name:               multinode-671000-m02
	I1014 08:47:28.626872   15224 command_runner.go:130] > Roles:              <none>
	I1014 08:47:28.626872   15224 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I1014 08:47:28.626872   15224 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I1014 08:47:28.626872   15224 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I1014 08:47:28.626872   15224 command_runner.go:130] >                     kubernetes.io/hostname=multinode-671000-m02
	I1014 08:47:28.626872   15224 command_runner.go:130] >                     kubernetes.io/os=linux
	I1014 08:47:28.626872   15224 command_runner.go:130] >                     minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b
	I1014 08:47:28.626872   15224 command_runner.go:130] >                     minikube.k8s.io/name=multinode-671000
	I1014 08:47:28.626872   15224 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I1014 08:47:28.626872   15224 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_10_14T08_25_50_0700
	I1014 08:47:28.626872   15224 command_runner.go:130] >                     minikube.k8s.io/version=v1.34.0
	I1014 08:47:28.627902   15224 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I1014 08:47:28.627902   15224 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I1014 08:47:28.627902   15224 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I1014 08:47:28.627902   15224 command_runner.go:130] > CreationTimestamp:  Mon, 14 Oct 2024 15:25:49 +0000
	I1014 08:47:28.627902   15224 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I1014 08:47:28.627902   15224 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I1014 08:47:28.627902   15224 command_runner.go:130] > Unschedulable:      false
	I1014 08:47:28.627902   15224 command_runner.go:130] > Lease:
	I1014 08:47:28.627902   15224 command_runner.go:130] >   HolderIdentity:  multinode-671000-m02
	I1014 08:47:28.627902   15224 command_runner.go:130] >   AcquireTime:     <unset>
	I1014 08:47:28.627902   15224 command_runner.go:130] >   RenewTime:       Mon, 14 Oct 2024 15:43:00 +0000
	I1014 08:47:28.627902   15224 command_runner.go:130] > Conditions:
	I1014 08:47:28.627902   15224 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I1014 08:47:28.627902   15224 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I1014 08:47:28.627902   15224 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 14 Oct 2024 15:42:08 +0000   Mon, 14 Oct 2024 15:43:45 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I1014 08:47:28.627902   15224 command_runner.go:130] >   DiskPressure     Unknown   Mon, 14 Oct 2024 15:42:08 +0000   Mon, 14 Oct 2024 15:43:45 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I1014 08:47:28.627902   15224 command_runner.go:130] >   PIDPressure      Unknown   Mon, 14 Oct 2024 15:42:08 +0000   Mon, 14 Oct 2024 15:43:45 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I1014 08:47:28.627902   15224 command_runner.go:130] >   Ready            Unknown   Mon, 14 Oct 2024 15:42:08 +0000   Mon, 14 Oct 2024 15:43:45 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I1014 08:47:28.627902   15224 command_runner.go:130] > Addresses:
	I1014 08:47:28.627902   15224 command_runner.go:130] >   InternalIP:  172.20.109.137
	I1014 08:47:28.627902   15224 command_runner.go:130] >   Hostname:    multinode-671000-m02
	I1014 08:47:28.627902   15224 command_runner.go:130] > Capacity:
	I1014 08:47:28.627902   15224 command_runner.go:130] >   cpu:                2
	I1014 08:47:28.627902   15224 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I1014 08:47:28.627902   15224 command_runner.go:130] >   hugepages-2Mi:      0
	I1014 08:47:28.627902   15224 command_runner.go:130] >   memory:             2164264Ki
	I1014 08:47:28.627902   15224 command_runner.go:130] >   pods:               110
	I1014 08:47:28.627902   15224 command_runner.go:130] > Allocatable:
	I1014 08:47:28.627902   15224 command_runner.go:130] >   cpu:                2
	I1014 08:47:28.627902   15224 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I1014 08:47:28.627902   15224 command_runner.go:130] >   hugepages-2Mi:      0
	I1014 08:47:28.627902   15224 command_runner.go:130] >   memory:             2164264Ki
	I1014 08:47:28.627902   15224 command_runner.go:130] >   pods:               110
	I1014 08:47:28.627902   15224 command_runner.go:130] > System Info:
	I1014 08:47:28.627902   15224 command_runner.go:130] >   Machine ID:                 d57a3470ff3c4f03abf025a19c5c23d9
	I1014 08:47:28.627902   15224 command_runner.go:130] >   System UUID:                d36e677f-0d87-a94f-af28-d8324326f88f
	I1014 08:47:28.627902   15224 command_runner.go:130] >   Boot ID:                    33649cab-156e-4789-8adf-a6fc060b3de7
	I1014 08:47:28.627902   15224 command_runner.go:130] >   Kernel Version:             5.10.207
	I1014 08:47:28.627902   15224 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I1014 08:47:28.627902   15224 command_runner.go:130] >   Operating System:           linux
	I1014 08:47:28.627902   15224 command_runner.go:130] >   Architecture:               amd64
	I1014 08:47:28.627902   15224 command_runner.go:130] >   Container Runtime Version:  docker://27.3.1
	I1014 08:47:28.627902   15224 command_runner.go:130] >   Kubelet Version:            v1.31.1
	I1014 08:47:28.627902   15224 command_runner.go:130] >   Kube-Proxy Version:         v1.31.1
	I1014 08:47:28.627902   15224 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I1014 08:47:28.627902   15224 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I1014 08:47:28.627902   15224 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I1014 08:47:28.627902   15224 command_runner.go:130] >   Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I1014 08:47:28.627902   15224 command_runner.go:130] >   ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	I1014 08:47:28.627902   15224 command_runner.go:130] >   default                     busybox-7dff88458-bnqj6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I1014 08:47:28.627902   15224 command_runner.go:130] >   kube-system                 kindnet-rgbjf              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      21m
	I1014 08:47:28.627902   15224 command_runner.go:130] >   kube-system                 kube-proxy-kbpjf           0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	I1014 08:47:28.627902   15224 command_runner.go:130] > Allocated resources:
	I1014 08:47:28.627902   15224 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I1014 08:47:28.627902   15224 command_runner.go:130] >   Resource           Requests   Limits
	I1014 08:47:28.627902   15224 command_runner.go:130] >   --------           --------   ------
	I1014 08:47:28.627902   15224 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I1014 08:47:28.627902   15224 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I1014 08:47:28.627902   15224 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I1014 08:47:28.627902   15224 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I1014 08:47:28.627902   15224 command_runner.go:130] > Events:
	I1014 08:47:28.627902   15224 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I1014 08:47:28.627902   15224 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I1014 08:47:28.627902   15224 command_runner.go:130] >   Normal  Starting                 21m                kube-proxy       
	I1014 08:47:28.627902   15224 command_runner.go:130] >   Normal  NodeHasSufficientMemory  21m (x2 over 21m)  kubelet          Node multinode-671000-m02 status is now: NodeHasSufficientMemory
	I1014 08:47:28.627902   15224 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    21m (x2 over 21m)  kubelet          Node multinode-671000-m02 status is now: NodeHasNoDiskPressure
	I1014 08:47:28.627902   15224 command_runner.go:130] >   Normal  NodeHasSufficientPID     21m (x2 over 21m)  kubelet          Node multinode-671000-m02 status is now: NodeHasSufficientPID
	I1014 08:47:28.627902   15224 command_runner.go:130] >   Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	I1014 08:47:28.627902   15224 command_runner.go:130] >   Normal  RegisteredNode           21m                node-controller  Node multinode-671000-m02 event: Registered Node multinode-671000-m02 in Controller
	I1014 08:47:28.627902   15224 command_runner.go:130] >   Normal  NodeReady                21m                kubelet          Node multinode-671000-m02 status is now: NodeReady
	I1014 08:47:28.627902   15224 command_runner.go:130] >   Normal  NodeNotReady             3m43s              node-controller  Node multinode-671000-m02 status is now: NodeNotReady
	I1014 08:47:28.628864   15224 command_runner.go:130] >   Normal  RegisteredNode           70s                node-controller  Node multinode-671000-m02 event: Registered Node multinode-671000-m02 in Controller
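Every condition on multinode-671000-m02 is Unknown with reason NodeStatusUnknown: the kubelet's last heartbeat landed at 15:42:08, and once the controller-manager's node-monitor grace period elapsed, the node-lifecycle-controller marked the status unknown and applied the node.kubernetes.io/unreachable NoSchedule and NoExecute taints shown under Taints:. A hedged client-go sketch for surfacing every node in this state (default kubeconfig assumed; this tool is not part of the test harness):

// notready.go: list nodes whose Ready condition is not True,
// printing their conditions and taints. Minimal illustrative sketch.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status != corev1.ConditionTrue {
				fmt.Printf("%s: Ready=%s reason=%s (last heartbeat %s)\n",
					n.Name, c.Status, c.Reason, c.LastHeartbeatTime)
				for _, t := range n.Spec.Taints {
					fmt.Printf("  taint: %s:%s\n", t.Key, t.Effect)
				}
			}
		}
	}
}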
	I1014 08:47:28.648896   15224 command_runner.go:130] > Name:               multinode-671000-m03
	I1014 08:47:28.648896   15224 command_runner.go:130] > Roles:              <none>
	I1014 08:47:28.648896   15224 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I1014 08:47:28.648896   15224 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I1014 08:47:28.648896   15224 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I1014 08:47:28.648896   15224 command_runner.go:130] >                     kubernetes.io/hostname=multinode-671000-m03
	I1014 08:47:28.648896   15224 command_runner.go:130] >                     kubernetes.io/os=linux
	I1014 08:47:28.648896   15224 command_runner.go:130] >                     minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b
	I1014 08:47:28.648896   15224 command_runner.go:130] >                     minikube.k8s.io/name=multinode-671000
	I1014 08:47:28.648896   15224 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I1014 08:47:28.648896   15224 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_10_14T08_41_35_0700
	I1014 08:47:28.648896   15224 command_runner.go:130] >                     minikube.k8s.io/version=v1.34.0
	I1014 08:47:28.648896   15224 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I1014 08:47:28.648896   15224 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I1014 08:47:28.648896   15224 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I1014 08:47:28.648896   15224 command_runner.go:130] > CreationTimestamp:  Mon, 14 Oct 2024 15:41:34 +0000
	I1014 08:47:28.648896   15224 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I1014 08:47:28.648896   15224 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I1014 08:47:28.648896   15224 command_runner.go:130] > Unschedulable:      false
	I1014 08:47:28.648896   15224 command_runner.go:130] > Lease:
	I1014 08:47:28.648896   15224 command_runner.go:130] >   HolderIdentity:  multinode-671000-m03
	I1014 08:47:28.648896   15224 command_runner.go:130] >   AcquireTime:     <unset>
	I1014 08:47:28.648896   15224 command_runner.go:130] >   RenewTime:       Mon, 14 Oct 2024 15:42:46 +0000
	I1014 08:47:28.648896   15224 command_runner.go:130] > Conditions:
	I1014 08:47:28.648896   15224 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I1014 08:47:28.648896   15224 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I1014 08:47:28.648896   15224 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 14 Oct 2024 15:41:53 +0000   Mon, 14 Oct 2024 15:43:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I1014 08:47:28.648896   15224 command_runner.go:130] >   DiskPressure     Unknown   Mon, 14 Oct 2024 15:41:53 +0000   Mon, 14 Oct 2024 15:43:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I1014 08:47:28.648896   15224 command_runner.go:130] >   PIDPressure      Unknown   Mon, 14 Oct 2024 15:41:53 +0000   Mon, 14 Oct 2024 15:43:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I1014 08:47:28.648896   15224 command_runner.go:130] >   Ready            Unknown   Mon, 14 Oct 2024 15:41:53 +0000   Mon, 14 Oct 2024 15:43:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I1014 08:47:28.648896   15224 command_runner.go:130] > Addresses:
	I1014 08:47:28.648896   15224 command_runner.go:130] >   InternalIP:  172.20.102.29
	I1014 08:47:28.648896   15224 command_runner.go:130] >   Hostname:    multinode-671000-m03
	I1014 08:47:28.648896   15224 command_runner.go:130] > Capacity:
	I1014 08:47:28.648896   15224 command_runner.go:130] >   cpu:                2
	I1014 08:47:28.648896   15224 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I1014 08:47:28.648896   15224 command_runner.go:130] >   hugepages-2Mi:      0
	I1014 08:47:28.648896   15224 command_runner.go:130] >   memory:             2164264Ki
	I1014 08:47:28.648896   15224 command_runner.go:130] >   pods:               110
	I1014 08:47:28.648896   15224 command_runner.go:130] > Allocatable:
	I1014 08:47:28.648896   15224 command_runner.go:130] >   cpu:                2
	I1014 08:47:28.648896   15224 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I1014 08:47:28.648896   15224 command_runner.go:130] >   hugepages-2Mi:      0
	I1014 08:47:28.648896   15224 command_runner.go:130] >   memory:             2164264Ki
	I1014 08:47:28.648896   15224 command_runner.go:130] >   pods:               110
	I1014 08:47:28.648896   15224 command_runner.go:130] > System Info:
	I1014 08:47:28.648896   15224 command_runner.go:130] >   Machine ID:                 6da8cf5e96c04d55b9129d0893534bf2
	I1014 08:47:28.648896   15224 command_runner.go:130] >   System UUID:                49616488-815a-3f43-8f47-13dbf29b6ca7
	I1014 08:47:28.648896   15224 command_runner.go:130] >   Boot ID:                    d9fe58fb-ac8e-4430-9563-1b3e9fd35ffd
	I1014 08:47:28.648896   15224 command_runner.go:130] >   Kernel Version:             5.10.207
	I1014 08:47:28.648896   15224 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I1014 08:47:28.648896   15224 command_runner.go:130] >   Operating System:           linux
	I1014 08:47:28.648896   15224 command_runner.go:130] >   Architecture:               amd64
	I1014 08:47:28.648896   15224 command_runner.go:130] >   Container Runtime Version:  docker://27.3.1
	I1014 08:47:28.648896   15224 command_runner.go:130] >   Kubelet Version:            v1.31.1
	I1014 08:47:28.648896   15224 command_runner.go:130] >   Kube-Proxy Version:         v1.31.1
	I1014 08:47:28.648896   15224 command_runner.go:130] > PodCIDR:                      10.244.3.0/24
	I1014 08:47:28.648896   15224 command_runner.go:130] > PodCIDRs:                     10.244.3.0/24
	I1014 08:47:28.648896   15224 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I1014 08:47:28.648896   15224 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I1014 08:47:28.648896   15224 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I1014 08:47:28.648896   15224 command_runner.go:130] >   kube-system                 kindnet-5rqxq       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	I1014 08:47:28.648896   15224 command_runner.go:130] >   kube-system                 kube-proxy-n6txs    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	I1014 08:47:28.648896   15224 command_runner.go:130] > Allocated resources:
	I1014 08:47:28.648896   15224 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I1014 08:47:28.648896   15224 command_runner.go:130] >   Resource           Requests   Limits
	I1014 08:47:28.648896   15224 command_runner.go:130] >   --------           --------   ------
	I1014 08:47:28.648896   15224 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I1014 08:47:28.648896   15224 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I1014 08:47:28.648896   15224 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I1014 08:47:28.648896   15224 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I1014 08:47:28.648896   15224 command_runner.go:130] > Events:
	I1014 08:47:28.648896   15224 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I1014 08:47:28.648896   15224 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I1014 08:47:28.648896   15224 command_runner.go:130] >   Normal  Starting                 5m50s                  kube-proxy       
	I1014 08:47:28.648896   15224 command_runner.go:130] >   Normal  Starting                 16m                    kube-proxy       
	I1014 08:47:28.648896   15224 command_runner.go:130] >   Normal  NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	I1014 08:47:28.648896   15224 command_runner.go:130] >   Normal  NodeHasSufficientMemory  16m (x2 over 16m)      kubelet          Node multinode-671000-m03 status is now: NodeHasSufficientMemory
	I1014 08:47:28.648896   15224 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    16m (x2 over 16m)      kubelet          Node multinode-671000-m03 status is now: NodeHasNoDiskPressure
	I1014 08:47:28.648896   15224 command_runner.go:130] >   Normal  NodeHasSufficientPID     16m (x2 over 16m)      kubelet          Node multinode-671000-m03 status is now: NodeHasSufficientPID
	I1014 08:47:28.648896   15224 command_runner.go:130] >   Normal  NodeReady                16m                    kubelet          Node multinode-671000-m03 status is now: NodeReady
	I1014 08:47:28.648896   15224 command_runner.go:130] >   Normal  Starting                 5m54s                  kubelet          Starting kubelet.
	I1014 08:47:28.648896   15224 command_runner.go:130] >   Normal  NodeHasSufficientMemory  5m54s (x2 over 5m54s)  kubelet          Node multinode-671000-m03 status is now: NodeHasSufficientMemory
	I1014 08:47:28.648896   15224 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    5m54s (x2 over 5m54s)  kubelet          Node multinode-671000-m03 status is now: NodeHasNoDiskPressure
	I1014 08:47:28.648896   15224 command_runner.go:130] >   Normal  NodeHasSufficientPID     5m54s (x2 over 5m54s)  kubelet          Node multinode-671000-m03 status is now: NodeHasSufficientPID
	I1014 08:47:28.648896   15224 command_runner.go:130] >   Normal  NodeAllocatableEnforced  5m54s                  kubelet          Updated Node Allocatable limit across pods
	I1014 08:47:28.649865   15224 command_runner.go:130] >   Normal  RegisteredNode           5m49s                  node-controller  Node multinode-671000-m03 event: Registered Node multinode-671000-m03 in Controller
	I1014 08:47:28.649865   15224 command_runner.go:130] >   Normal  NodeReady                5m35s                  kubelet          Node multinode-671000-m03 status is now: NodeReady
	I1014 08:47:28.649865   15224 command_runner.go:130] >   Normal  NodeNotReady             3m59s                  node-controller  Node multinode-671000-m03 status is now: NodeNotReady
	I1014 08:47:28.649865   15224 command_runner.go:130] >   Normal  RegisteredNode           70s                    node-controller  Node multinode-671000-m03 event: Registered Node multinode-671000-m03 in Controller
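multinode-671000-m03 is in the same unreachable state (last heartbeat 15:41:53, taints applied 15:43:29). Because the taint carries the NoExecute effect, the taint-eviction-controller (started in the controller-manager logs gathered next) evicts pods once their node.kubernetes.io/unreachable toleration runs out; the admission-added default toleration lasts 300 seconds, while DaemonSet pods such as kindnet-5rqxq and kube-proxy-n6txs tolerate the taint indefinitely, which is why they still appear in the Non-terminated Pods table. A small illustrative sketch (node name and kubeconfig are placeholders) that prints each pod's toleration window:

// tolerations.go: show how long pods on a node tolerate the unreachable
// NoExecute taint before the taint-eviction-controller removes them.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	const node = "multinode-671000-m03" // placeholder node name
	pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
		metav1.ListOptions{FieldSelector: "spec.nodeName=" + node})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		for _, t := range p.Spec.Tolerations {
			// Tolerations with a nil TolerationSeconds (e.g. DaemonSet pods)
			// tolerate the taint forever and are never evicted by it.
			if t.Key == "node.kubernetes.io/unreachable" && t.TolerationSeconds != nil {
				fmt.Printf("%s/%s: tolerates unreachable for %ds\n",
					p.Namespace, p.Name, *t.TolerationSeconds)
			}
		}
	}
}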
	I1014 08:47:28.660875   15224 logs.go:123] Gathering logs for kube-controller-manager [712aad669c9f] ...
	I1014 08:47:28.660875   15224 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 712aad669c9f"
	I1014 08:47:28.689866   15224 command_runner.go:130] ! I1014 15:22:34.276457       1 serving.go:386] Generated self-signed cert in-memory
	I1014 08:47:28.690767   15224 command_runner.go:130] ! I1014 15:22:34.721812       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I1014 08:47:28.690767   15224 command_runner.go:130] ! I1014 15:22:34.722099       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 08:47:28.690964   15224 command_runner.go:130] ! I1014 15:22:34.724748       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1014 08:47:28.690964   15224 command_runner.go:130] ! I1014 15:22:34.725085       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1014 08:47:28.691055   15224 command_runner.go:130] ! I1014 15:22:34.725754       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I1014 08:47:28.691884   15224 command_runner.go:130] ! I1014 15:22:34.725985       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1014 08:47:28.691884   15224 command_runner.go:130] ! I1014 15:22:39.207411       1 controllermanager.go:797] "Started controller" controller="serviceaccount-token-controller"
	I1014 08:47:28.692451   15224 command_runner.go:130] ! I1014 15:22:39.208026       1 controllermanager.go:749] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I1014 08:47:28.692740   15224 command_runner.go:130] ! I1014 15:22:39.207651       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I1014 08:47:28.692905   15224 command_runner.go:130] ! I1014 15:22:39.210064       1 shared_informer.go:320] Caches are synced for tokens
	I1014 08:47:28.692925   15224 command_runner.go:130] ! I1014 15:22:39.224528       1 controllermanager.go:797] "Started controller" controller="taint-eviction-controller"
	I1014 08:47:28.693559   15224 command_runner.go:130] ! I1014 15:22:39.224966       1 taint_eviction.go:281] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I1014 08:47:28.693559   15224 command_runner.go:130] ! I1014 15:22:39.225213       1 taint_eviction.go:287] "Sending events to api server" logger="taint-eviction-controller"
	I1014 08:47:28.693559   15224 command_runner.go:130] ! I1014 15:22:39.226734       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I1014 08:47:28.693559   15224 command_runner.go:130] ! I1014 15:22:39.238395       1 controllermanager.go:797] "Started controller" controller="pod-garbage-collector-controller"
	I1014 08:47:28.693559   15224 command_runner.go:130] ! I1014 15:22:39.238610       1 gc_controller.go:99] "Starting GC controller" logger="pod-garbage-collector-controller"
	I1014 08:47:28.693559   15224 command_runner.go:130] ! I1014 15:22:39.239186       1 shared_informer.go:313] Waiting for caches to sync for GC
	I1014 08:47:28.693559   15224 command_runner.go:130] ! I1014 15:22:39.257957       1 controllermanager.go:797] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I1014 08:47:28.693559   15224 command_runner.go:130] ! I1014 15:22:39.258113       1 horizontal.go:201] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I1014 08:47:28.694290   15224 command_runner.go:130] ! I1014 15:22:39.264110       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I1014 08:47:28.694317   15224 command_runner.go:130] ! I1014 15:22:39.291746       1 controllermanager.go:797] "Started controller" controller="token-cleaner-controller"
	I1014 08:47:28.694317   15224 command_runner.go:130] ! I1014 15:22:39.291968       1 tokencleaner.go:117] "Starting token cleaner controller" logger="token-cleaner-controller"
	I1014 08:47:28.694461   15224 command_runner.go:130] ! I1014 15:22:39.292012       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I1014 08:47:28.694599   15224 command_runner.go:130] ! I1014 15:22:39.292035       1 shared_informer.go:320] Caches are synced for token_cleaner
	I1014 08:47:28.694825   15224 command_runner.go:130] ! E1014 15:22:39.298368       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I1014 08:47:28.694825   15224 command_runner.go:130] ! I1014 15:22:39.298490       1 controllermanager.go:775] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I1014 08:47:28.694825   15224 command_runner.go:130] ! I1014 15:22:39.320068       1 controllermanager.go:797] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I1014 08:47:28.694825   15224 command_runner.go:130] ! I1014 15:22:39.321579       1 pvc_protection_controller.go:105] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I1014 08:47:28.695209   15224 command_runner.go:130] ! I1014 15:22:39.322507       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I1014 08:47:28.695209   15224 command_runner.go:130] ! I1014 15:22:39.334562       1 controllermanager.go:797] "Started controller" controller="endpointslice-mirroring-controller"
	I1014 08:47:28.695209   15224 command_runner.go:130] ! I1014 15:22:39.335065       1 endpointslicemirroring_controller.go:227] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I1014 08:47:28.695209   15224 command_runner.go:130] ! I1014 15:22:39.335174       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I1014 08:47:28.695209   15224 command_runner.go:130] ! I1014 15:22:39.357454       1 controllermanager.go:797] "Started controller" controller="serviceaccount-controller"
	I1014 08:47:28.695209   15224 command_runner.go:130] ! I1014 15:22:39.357636       1 serviceaccounts_controller.go:114] "Starting service account controller" logger="serviceaccount-controller"
	I1014 08:47:28.695209   15224 command_runner.go:130] ! I1014 15:22:39.357669       1 shared_informer.go:313] Waiting for caches to sync for service account
	I1014 08:47:28.695209   15224 command_runner.go:130] ! I1014 15:22:39.377687       1 controllermanager.go:797] "Started controller" controller="daemonset-controller"
	I1014 08:47:28.695209   15224 command_runner.go:130] ! I1014 15:22:39.378056       1 daemon_controller.go:294] "Starting daemon sets controller" logger="daemonset-controller"
	I1014 08:47:28.695209   15224 command_runner.go:130] ! I1014 15:22:39.378087       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I1014 08:47:28.695209   15224 command_runner.go:130] ! I1014 15:22:39.416186       1 controllermanager.go:797] "Started controller" controller="disruption-controller"
	I1014 08:47:28.696194   15224 command_runner.go:130] ! I1014 15:22:39.416643       1 disruption.go:452] "Sending events to api server." logger="disruption-controller"
	I1014 08:47:28.696194   15224 command_runner.go:130] ! I1014 15:22:39.417022       1 disruption.go:463] "Starting disruption controller" logger="disruption-controller"
	I1014 08:47:28.696194   15224 command_runner.go:130] ! I1014 15:22:39.417371       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I1014 08:47:28.696194   15224 command_runner.go:130] ! I1014 15:22:39.469032       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I1014 08:47:28.696194   15224 command_runner.go:130] ! I1014 15:22:39.469507       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I1014 08:47:28.696194   15224 command_runner.go:130] ! I1014 15:22:39.469770       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I1014 08:47:28.696194   15224 command_runner.go:130] ! I1014 15:22:39.470779       1 controllermanager.go:797] "Started controller" controller="certificatesigningrequest-signing-controller"
	I1014 08:47:28.696194   15224 command_runner.go:130] ! I1014 15:22:39.470793       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I1014 08:47:28.696194   15224 command_runner.go:130] ! I1014 15:22:39.471453       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I1014 08:47:28.696194   15224 command_runner.go:130] ! I1014 15:22:39.470805       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I1014 08:47:28.696194   15224 command_runner.go:130] ! I1014 15:22:39.470829       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I1014 08:47:28.696194   15224 command_runner.go:130] ! I1014 15:22:39.471957       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I1014 08:47:28.696194   15224 command_runner.go:130] ! I1014 15:22:39.470841       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I1014 08:47:28.696194   15224 command_runner.go:130] ! I1014 15:22:39.470861       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I1014 08:47:28.697225   15224 command_runner.go:130] ! I1014 15:22:39.472955       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I1014 08:47:28.697225   15224 command_runner.go:130] ! I1014 15:22:39.470870       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I1014 08:47:28.697225   15224 command_runner.go:130] ! I1014 15:22:39.621859       1 controllermanager.go:797] "Started controller" controller="persistentvolume-binder-controller"
	I1014 08:47:28.697225   15224 command_runner.go:130] ! I1014 15:22:39.622638       1 pv_controller_base.go:308] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I1014 08:47:28.697225   15224 command_runner.go:130] ! I1014 15:22:39.623052       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I1014 08:47:28.697225   15224 command_runner.go:130] ! I1014 15:22:39.777984       1 controllermanager.go:797] "Started controller" controller="root-ca-certificate-publisher-controller"
	I1014 08:47:28.697225   15224 command_runner.go:130] ! I1014 15:22:39.778063       1 publisher.go:107] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I1014 08:47:28.697225   15224 command_runner.go:130] ! I1014 15:22:39.778141       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I1014 08:47:28.697225   15224 command_runner.go:130] ! I1014 15:22:39.918879       1 controllermanager.go:797] "Started controller" controller="replicationcontroller-controller"
	I1014 08:47:28.697225   15224 command_runner.go:130] ! I1014 15:22:39.919046       1 replica_set.go:217] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I1014 08:47:28.697225   15224 command_runner.go:130] ! I1014 15:22:39.919060       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I1014 08:47:28.698198   15224 command_runner.go:130] ! I1014 15:22:40.166453       1 controllermanager.go:797] "Started controller" controller="garbage-collector-controller"
	I1014 08:47:28.698198   15224 command_runner.go:130] ! I1014 15:22:40.167822       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I1014 08:47:28.698198   15224 command_runner.go:130] ! I1014 15:22:40.168483       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I1014 08:47:28.698198   15224 command_runner.go:130] ! I1014 15:22:40.168745       1 graph_builder.go:351] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I1014 08:47:28.698198   15224 command_runner.go:130] ! I1014 15:22:40.423412       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I1014 08:47:28.698198   15224 command_runner.go:130] ! I1014 15:22:40.423795       1 controllermanager.go:797] "Started controller" controller="node-ipam-controller"
	I1014 08:47:28.698198   15224 command_runner.go:130] ! I1014 15:22:40.424239       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I1014 08:47:28.698198   15224 command_runner.go:130] ! I1014 15:22:40.424496       1 controllermanager.go:775] "Warning: skipping controller" controller="node-route-controller"
	I1014 08:47:28.698198   15224 command_runner.go:130] ! I1014 15:22:40.424173       1 node_ipam_controller.go:141] "Starting ipam controller" logger="node-ipam-controller"
	I1014 08:47:28.698198   15224 command_runner.go:130] ! I1014 15:22:40.425286       1 shared_informer.go:313] Waiting for caches to sync for node
	I1014 08:47:28.698198   15224 command_runner.go:130] ! I1014 15:22:40.570482       1 controllermanager.go:797] "Started controller" controller="persistentvolume-expander-controller"
	I1014 08:47:28.698198   15224 command_runner.go:130] ! I1014 15:22:40.570669       1 expand_controller.go:328] "Starting expand controller" logger="persistentvolume-expander-controller"
	I1014 08:47:28.698198   15224 command_runner.go:130] ! I1014 15:22:40.570684       1 shared_informer.go:313] Waiting for caches to sync for expand
	I1014 08:47:28.698198   15224 command_runner.go:130] ! I1014 15:22:40.718742       1 controllermanager.go:797] "Started controller" controller="ttl-after-finished-controller"
	I1014 08:47:28.698198   15224 command_runner.go:130] ! I1014 15:22:40.718766       1 controllermanager.go:749] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I1014 08:47:28.698198   15224 command_runner.go:130] ! I1014 15:22:40.718828       1 ttlafterfinished_controller.go:112] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I1014 08:47:28.698198   15224 command_runner.go:130] ! I1014 15:22:40.718839       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I1014 08:47:28.698198   15224 command_runner.go:130] ! I1014 15:22:40.875244       1 controllermanager.go:797] "Started controller" controller="endpointslice-controller"
	I1014 08:47:28.698198   15224 command_runner.go:130] ! I1014 15:22:40.875390       1 endpointslice_controller.go:281] "Starting endpoint slice controller" logger="endpointslice-controller"
	I1014 08:47:28.698198   15224 command_runner.go:130] ! I1014 15:22:40.875405       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I1014 08:47:28.698198   15224 command_runner.go:130] ! I1014 15:22:41.022254       1 controllermanager.go:797] "Started controller" controller="replicaset-controller"
	I1014 08:47:28.698198   15224 command_runner.go:130] ! I1014 15:22:41.023099       1 replica_set.go:217] "Starting controller" logger="replicaset-controller" name="replicaset"
	I1014 08:47:28.698198   15224 command_runner.go:130] ! I1014 15:22:41.023161       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I1014 08:47:28.698198   15224 command_runner.go:130] ! I1014 15:22:41.176342       1 controllermanager.go:797] "Started controller" controller="statefulset-controller"
	I1014 08:47:28.698198   15224 command_runner.go:130] ! I1014 15:22:41.176460       1 stateful_set.go:166] "Starting stateful set controller" logger="statefulset-controller"
	I1014 08:47:28.698198   15224 command_runner.go:130] ! I1014 15:22:41.176471       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I1014 08:47:28.698198   15224 command_runner.go:130] ! I1014 15:22:41.319171       1 controllermanager.go:797] "Started controller" controller="persistentvolume-protection-controller"
	I1014 08:47:28.698198   15224 command_runner.go:130] ! I1014 15:22:41.319300       1 pv_protection_controller.go:81] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I1014 08:47:28.698198   15224 command_runner.go:130] ! I1014 15:22:41.319332       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I1014 08:47:28.698198   15224 command_runner.go:130] ! I1014 15:22:41.469263       1 controllermanager.go:797] "Started controller" controller="clusterrole-aggregation-controller"
	I1014 08:47:28.698198   15224 command_runner.go:130] ! I1014 15:22:41.469488       1 clusterroleaggregation_controller.go:194] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I1014 08:47:28.698198   15224 command_runner.go:130] ! I1014 15:22:41.470311       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I1014 08:47:28.698198   15224 command_runner.go:130] ! I1014 15:22:41.618471       1 controllermanager.go:797] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I1014 08:47:28.698198   15224 command_runner.go:130] ! I1014 15:22:41.618507       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I1014 08:47:28.698198   15224 command_runner.go:130] ! I1014 15:22:41.619582       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I1014 08:47:28.698198   15224 command_runner.go:130] ! I1014 15:22:41.813364       1 controllermanager.go:797] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I1014 08:47:28.698198   15224 command_runner.go:130] ! I1014 15:22:41.813412       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I1014 08:47:28.698198   15224 command_runner.go:130] ! I1014 15:22:42.123997       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I1014 08:47:28.699203   15224 command_runner.go:130] ! I1014 15:22:42.124656       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I1014 08:47:28.699203   15224 command_runner.go:130] ! I1014 15:22:42.125147       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I1014 08:47:28.699203   15224 command_runner.go:130] ! I1014 15:22:42.125502       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I1014 08:47:28.699203   15224 command_runner.go:130] ! I1014 15:22:42.125684       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I1014 08:47:28.699203   15224 command_runner.go:130] ! I1014 15:22:42.125715       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I1014 08:47:28.699203   15224 command_runner.go:130] ! I1014 15:22:42.125765       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I1014 08:47:28.699203   15224 command_runner.go:130] ! I1014 15:22:42.125789       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I1014 08:47:28.699203   15224 command_runner.go:130] ! I1014 15:22:42.125821       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I1014 08:47:28.699203   15224 command_runner.go:130] ! I1014 15:22:42.125850       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I1014 08:47:28.699203   15224 command_runner.go:130] ! I1014 15:22:42.125919       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I1014 08:47:28.699203   15224 command_runner.go:130] ! I1014 15:22:42.125938       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I1014 08:47:28.699203   15224 command_runner.go:130] ! W1014 15:22:42.125970       1 shared_informer.go:597] resyncPeriod 22h30m25.60471532s is smaller than resyncCheckPeriod 23h6m49.635307948s and the informer has already started. Changing it to 23h6m49.635307948s
	I1014 08:47:28.699203   15224 command_runner.go:130] ! W1014 15:22:42.126028       1 shared_informer.go:597] resyncPeriod 22h40m57.132720005s is smaller than resyncCheckPeriod 23h6m49.635307948s and the informer has already started. Changing it to 23h6m49.635307948s
	I1014 08:47:28.699203   15224 command_runner.go:130] ! I1014 15:22:42.126215       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I1014 08:47:28.699203   15224 command_runner.go:130] ! I1014 15:22:42.126353       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I1014 08:47:28.699203   15224 command_runner.go:130] ! I1014 15:22:42.126435       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I1014 08:47:28.699203   15224 command_runner.go:130] ! I1014 15:22:42.126461       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I1014 08:47:28.699203   15224 command_runner.go:130] ! I1014 15:22:42.126498       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I1014 08:47:28.699203   15224 command_runner.go:130] ! I1014 15:22:42.126514       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I1014 08:47:28.699203   15224 command_runner.go:130] ! I1014 15:22:42.126546       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I1014 08:47:28.699203   15224 command_runner.go:130] ! I1014 15:22:42.126572       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I1014 08:47:28.699203   15224 command_runner.go:130] ! I1014 15:22:42.126591       1 controllermanager.go:797] "Started controller" controller="resourcequota-controller"
	I1014 08:47:28.699203   15224 command_runner.go:130] ! I1014 15:22:42.127139       1 resource_quota_controller.go:300] "Starting resource quota controller" logger="resourcequota-controller"
	I1014 08:47:28.699203   15224 command_runner.go:130] ! I1014 15:22:42.127191       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I1014 08:47:28.699203   15224 command_runner.go:130] ! I1014 15:22:42.127239       1 resource_quota_monitor.go:308] "QuotaMonitor running" logger="resourcequota-controller"
	I1014 08:47:28.699203   15224 command_runner.go:130] ! I1014 15:22:42.377410       1 controllermanager.go:797] "Started controller" controller="namespace-controller"
	I1014 08:47:28.699203   15224 command_runner.go:130] ! I1014 15:22:42.378109       1 namespace_controller.go:202] "Starting namespace controller" logger="namespace-controller"
	I1014 08:47:28.699203   15224 command_runner.go:130] ! I1014 15:22:42.378533       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I1014 08:47:28.699203   15224 command_runner.go:130] ! I1014 15:22:42.520088       1 controllermanager.go:797] "Started controller" controller="job-controller"
	I1014 08:47:28.699203   15224 command_runner.go:130] ! I1014 15:22:42.520194       1 job_controller.go:226] "Starting job controller" logger="job-controller"
	I1014 08:47:28.699203   15224 command_runner.go:130] ! I1014 15:22:42.520661       1 shared_informer.go:313] Waiting for caches to sync for job
	I1014 08:47:28.699203   15224 command_runner.go:130] ! I1014 15:22:42.669141       1 controllermanager.go:797] "Started controller" controller="ttl-controller"
	I1014 08:47:28.699203   15224 command_runner.go:130] ! I1014 15:22:42.669227       1 ttl_controller.go:127] "Starting TTL controller" logger="ttl-controller"
	I1014 08:47:28.699203   15224 command_runner.go:130] ! I1014 15:22:42.669239       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I1014 08:47:28.699203   15224 command_runner.go:130] ! I1014 15:22:42.713738       1 node_lifecycle_controller.go:430] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I1014 08:47:28.699203   15224 command_runner.go:130] ! I1014 15:22:42.713795       1 controllermanager.go:797] "Started controller" controller="node-lifecycle-controller"
	I1014 08:47:28.699203   15224 command_runner.go:130] ! I1014 15:22:42.713972       1 node_lifecycle_controller.go:464] "Sending events to api server" logger="node-lifecycle-controller"
	I1014 08:47:28.699203   15224 command_runner.go:130] ! I1014 15:22:42.714019       1 node_lifecycle_controller.go:475] "Starting node controller" logger="node-lifecycle-controller"
	I1014 08:47:28.699203   15224 command_runner.go:130] ! I1014 15:22:42.714028       1 shared_informer.go:313] Waiting for caches to sync for taint
	I1014 08:47:28.699203   15224 command_runner.go:130] ! E1014 15:22:42.870353       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:42.870400       1 controllermanager.go:775] "Warning: skipping controller" controller="service-lb-controller"
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.022018       1 controllermanager.go:797] "Started controller" controller="persistentvolume-attach-detach-controller"
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.022670       1 attach_detach_controller.go:338] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.022756       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.169053       1 controllermanager.go:797] "Started controller" controller="ephemeral-volume-controller"
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.169165       1 controller.go:173] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.169572       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.319453       1 controllermanager.go:797] "Started controller" controller="endpoints-controller"
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.319620       1 endpoints_controller.go:182] "Starting endpoint controller" logger="endpoints-controller"
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.319648       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.471065       1 controllermanager.go:797] "Started controller" controller="deployment-controller"
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.471807       1 deployment_controller.go:173] "Starting controller" logger="deployment-controller" controller="deployment"
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.472102       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.621382       1 controllermanager.go:797] "Started controller" controller="cronjob-controller"
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.621522       1 cronjob_controllerv2.go:145] "Starting cronjob controller v2" logger="cronjob-controller"
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.621537       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.663267       1 controllermanager.go:797] "Started controller" controller="certificatesigningrequest-approving-controller"
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.663415       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.663427       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.822946       1 controllermanager.go:797] "Started controller" controller="bootstrap-signer-controller"
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.822992       1 controllermanager.go:775] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.823061       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.863507       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.863638       1 controllermanager.go:797] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.863659       1 controllermanager.go:749] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.902554       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.913563       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.916687       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-671000\" does not exist"
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.921355       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.921578       1 shared_informer.go:320] Caches are synced for cronjob
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.921709       1 shared_informer.go:320] Caches are synced for job
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.921822       1 shared_informer.go:320] Caches are synced for PV protection
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.922806       1 shared_informer.go:320] Caches are synced for PVC protection
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.922814       1 shared_informer.go:320] Caches are synced for TTL after finished
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.924127       1 shared_informer.go:320] Caches are synced for persistent volume
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.924751       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.925596       1 shared_informer.go:320] Caches are synced for node
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.925653       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.925863       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.925961       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.925971       1 shared_informer.go:320] Caches are synced for cidrallocator
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.927918       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.933656       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.935993       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.939827       1 shared_informer.go:320] Caches are synced for GC
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.945652       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-671000" podCIDRs=["10.244.0.0/24"]
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.945733       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000"
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.946434       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000"
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.958217       1 shared_informer.go:320] Caches are synced for service account
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.964566       1 shared_informer.go:320] Caches are synced for HPA
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.970909       1 shared_informer.go:320] Caches are synced for TTL
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.971119       1 shared_informer.go:320] Caches are synced for ephemeral
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.971337       1 shared_informer.go:320] Caches are synced for expand
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.975501       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.976796       1 shared_informer.go:320] Caches are synced for stateful set
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.978344       1 shared_informer.go:320] Caches are synced for crt configmap
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.978435       1 shared_informer.go:320] Caches are synced for daemon sets
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.980084       1 shared_informer.go:320] Caches are synced for namespace
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:44.014728       1 shared_informer.go:320] Caches are synced for taint
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:44.015046       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:44.015932       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-671000"
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:22:44.016156       1 node_lifecycle_controller.go:1036] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:22:44.020094       1 shared_informer.go:320] Caches are synced for endpoint
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:22:44.020640       1 shared_informer.go:320] Caches are synced for ReplicationController
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:22:44.071958       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:22:44.103447       1 shared_informer.go:320] Caches are synced for resource quota
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:22:44.118642       1 shared_informer.go:320] Caches are synced for disruption
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:22:44.123565       1 shared_informer.go:320] Caches are synced for attach detach
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:22:44.124082       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:22:44.128052       1 shared_informer.go:320] Caches are synced for resource quota
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:22:44.164601       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:22:44.170410       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:22:44.172085       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:22:44.172168       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:22:44.172762       1 shared_informer.go:320] Caches are synced for deployment
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:22:44.173998       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:22:44.583260       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000"
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:22:44.634360       1 shared_informer.go:320] Caches are synced for garbage collector
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:22:44.669630       1 shared_informer.go:320] Caches are synced for garbage collector
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:22:44.669841       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:22:45.450540       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="308.738304ms"
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:22:45.524372       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="73.173482ms"
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:22:45.524478       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="65.397µs"
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:22:46.000395       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="74.724912ms"
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:22:46.017930       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="16.329807ms"
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:22:46.018255       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="275.988µs"
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:23:06.558708       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000"
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:23:06.579629       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000"
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:23:06.601705       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="101.399µs"
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:23:06.643522       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="58.099µs"
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:23:08.868021       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="148.904µs"
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:23:08.936155       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="33.695698ms"
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:23:08.939220       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="3.012072ms"
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:23:09.023157       1 node_lifecycle_controller.go:1055] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:23:10.921399       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000"
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:25:49.920125       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-671000-m02\" does not exist"
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:25:49.955308       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-671000-m02" podCIDRs=["10.244.1.0/24"]
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:25:49.956041       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:25:49.956493       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:25:50.332394       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:25:50.885049       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:25:54.059204       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-671000-m02"
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:25:54.342262       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:26:00.157293       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:26:18.720546       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:26:18.720611       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-671000-m02"
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:26:18.738467       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:26:19.084143       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:26:20.411603       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:26:44.435156       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="75.721873ms"
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:26:44.496244       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="60.852418ms"
	I1014 08:47:28.702210   15224 command_runner.go:130] ! I1014 15:26:44.496945       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="131.501µs"
	I1014 08:47:28.702210   15224 command_runner.go:130] ! I1014 15:26:44.540742       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="117.6µs"
	I1014 08:47:28.702210   15224 command_runner.go:130] ! I1014 15:26:47.680512       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="13.465591ms"
	I1014 08:47:28.702210   15224 command_runner.go:130] ! I1014 15:26:47.680616       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="27.8µs"
	I1014 08:47:28.702210   15224 command_runner.go:130] ! I1014 15:26:47.878633       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="13.308091ms"
	I1014 08:47:28.702210   15224 command_runner.go:130] ! I1014 15:26:47.878779       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="30.7µs"
	I1014 08:47:28.702210   15224 command_runner.go:130] ! I1014 15:26:50.724728       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:28.702210   15224 command_runner.go:130] ! I1014 15:27:15.823577       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000"
	I1014 08:47:28.702210   15224 command_runner.go:130] ! I1014 15:30:35.115559       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-671000-m02"
	I1014 08:47:28.702210   15224 command_runner.go:130] ! I1014 15:30:35.116078       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-671000-m03\" does not exist"
	I1014 08:47:28.702210   15224 command_runner.go:130] ! I1014 15:30:35.128392       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-671000-m03" podCIDRs=["10.244.2.0/24"]
	I1014 08:47:28.702210   15224 command_runner.go:130] ! I1014 15:30:35.128677       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:28.702210   15224 command_runner.go:130] ! I1014 15:30:35.128924       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:28.702210   15224 command_runner.go:130] ! I1014 15:30:35.152829       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:28.702210   15224 command_runner.go:130] ! I1014 15:30:35.373296       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:28.702210   15224 command_runner.go:130] ! I1014 15:30:35.920577       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:28.702210   15224 command_runner.go:130] ! I1014 15:30:39.132287       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-671000-m03"
	I1014 08:47:28.702210   15224 command_runner.go:130] ! I1014 15:30:39.151825       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:28.702210   15224 command_runner.go:130] ! I1014 15:30:45.490553       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:28.702210   15224 command_runner.go:130] ! I1014 15:31:04.306000       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:28.702210   15224 command_runner.go:130] ! I1014 15:31:04.306453       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-671000-m03"
	I1014 08:47:28.702210   15224 command_runner.go:130] ! I1014 15:31:04.323636       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:28.702210   15224 command_runner.go:130] ! I1014 15:31:05.841789       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:28.702210   15224 command_runner.go:130] ! I1014 15:31:09.153752       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:28.702210   15224 command_runner.go:130] ! I1014 15:31:56.911043       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:28.702210   15224 command_runner.go:130] ! I1014 15:32:21.316935       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000"
	I1014 08:47:28.702210   15224 command_runner.go:130] ! I1014 15:36:11.719246       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:28.702210   15224 command_runner.go:130] ! I1014 15:37:02.446841       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:28.702210   15224 command_runner.go:130] ! I1014 15:37:26.676097       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000"
	I1014 08:47:28.702210   15224 command_runner.go:130] ! I1014 15:38:59.261991       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-671000-m02"
	I1014 08:47:28.702210   15224 command_runner.go:130] ! I1014 15:38:59.262728       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:28.702210   15224 command_runner.go:130] ! I1014 15:38:59.286871       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:28.702210   15224 command_runner.go:130] ! I1014 15:39:04.424423       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:28.702210   15224 command_runner.go:130] ! I1014 15:41:24.025444       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:28.702210   15224 command_runner.go:130] ! I1014 15:41:24.063975       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:28.702210   15224 command_runner.go:130] ! I1014 15:41:29.184402       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:28.702210   15224 command_runner.go:130] ! I1014 15:41:29.185577       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-671000-m02"
	I1014 08:47:28.702210   15224 command_runner.go:130] ! I1014 15:41:34.952323       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-671000-m03\" does not exist"
	I1014 08:47:28.702210   15224 command_runner.go:130] ! I1014 15:41:34.952330       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-671000-m02"
	I1014 08:47:28.702210   15224 command_runner.go:130] ! I1014 15:41:34.966125       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-671000-m03" podCIDRs=["10.244.3.0/24"]
	I1014 08:47:28.702210   15224 command_runner.go:130] ! I1014 15:41:34.966148       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:28.702210   15224 command_runner.go:130] ! I1014 15:41:34.966505       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:28.702210   15224 command_runner.go:130] ! I1014 15:41:34.987165       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:28.703189   15224 command_runner.go:130] ! I1014 15:41:35.003234       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:28.703189   15224 command_runner.go:130] ! I1014 15:41:35.540526       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:28.703189   15224 command_runner.go:130] ! I1014 15:41:39.448073       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:28.703189   15224 command_runner.go:130] ! I1014 15:41:45.343875       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:28.703189   15224 command_runner.go:130] ! I1014 15:41:53.719761       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:28.703189   15224 command_runner.go:130] ! I1014 15:41:53.720945       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-671000-m02"
	I1014 08:47:28.703189   15224 command_runner.go:130] ! I1014 15:41:53.741507       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:28.703189   15224 command_runner.go:130] ! I1014 15:41:54.369330       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:28.703189   15224 command_runner.go:130] ! I1014 15:42:08.557249       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:28.703189   15224 command_runner.go:130] ! I1014 15:42:32.770970       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000"
	I1014 08:47:28.703189   15224 command_runner.go:130] ! I1014 15:43:29.631595       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-671000-m02"
	I1014 08:47:28.703189   15224 command_runner.go:130] ! I1014 15:43:29.632207       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:28.703189   15224 command_runner.go:130] ! I1014 15:43:29.853526       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:28.703189   15224 command_runner.go:130] ! I1014 15:43:35.163131       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:28.703189   15224 command_runner.go:130] ! I1014 15:43:45.119758       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:28.703189   15224 command_runner.go:130] ! I1014 15:43:45.151031       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:28.703189   15224 command_runner.go:130] ! I1014 15:43:45.251625       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="18.269341ms"
	I1014 08:47:28.703189   15224 command_runner.go:130] ! I1014 15:43:45.252472       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="121.1µs"
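Note: the kube-controller-manager lines above are dominated by the client-go shared-informer startup handshake: each controller logs "Waiting for caches to sync" when its informers start and "Caches are synced" once the initial LIST/WATCH has been replayed into its local cache, after which the node-ipam, node-lifecycle and replicaset controllers begin reconciling multinode-671000 and its m02/m03 workers. A minimal Go sketch of that pattern (illustrative only, not minikube test code; assumes a reachable cluster via the default kubeconfig path):

    package main

    import (
        "fmt"
        "time"

        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/cache"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumption: a kubeconfig exists at the default path (~/.kube/config).
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        // Shared informer factory with a 30s resync, the same machinery the
        // controller-manager builds its per-controller informers on.
        factory := informers.NewSharedInformerFactory(client, 30*time.Second)
        nodes := factory.Core().V1().Nodes().Informer()

        stop := make(chan struct{})
        defer close(stop)
        factory.Start(stop) // "Waiting for caches to sync" is logged around here

        // Blocks until the initial LIST completes; the "Caches are synced"
        // lines above mark this returning true for each controller.
        if !cache.WaitForCacheSync(stop, nodes.HasSynced) {
            panic("cache sync failed")
        }
        fmt.Println("caches are synced")
    }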
	I1014 08:47:28.724182   15224 logs.go:123] Gathering logs for Docker ...
	I1014 08:47:28.724182   15224 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 08:47:28.757768   15224 command_runner.go:130] > Oct 14 15:44:44 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1014 08:47:28.757818   15224 command_runner.go:130] > Oct 14 15:44:44 minikube cri-dockerd[221]: time="2024-10-14T15:44:44Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I1014 08:47:28.757877   15224 command_runner.go:130] > Oct 14 15:44:44 minikube cri-dockerd[221]: time="2024-10-14T15:44:44Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1014 08:47:28.757877   15224 command_runner.go:130] > Oct 14 15:44:44 minikube cri-dockerd[221]: time="2024-10-14T15:44:44Z" level=info msg="Start docker client with request timeout 0s"
	I1014 08:47:28.757877   15224 command_runner.go:130] > Oct 14 15:44:44 minikube cri-dockerd[221]: time="2024-10-14T15:44:44Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1014 08:47:28.758090   15224 command_runner.go:130] > Oct 14 15:44:45 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I1014 08:47:28.758893   15224 command_runner.go:130] > Oct 14 15:44:45 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I1014 08:47:28.758893   15224 command_runner.go:130] > Oct 14 15:44:45 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I1014 08:47:28.759063   15224 command_runner.go:130] > Oct 14 15:44:47 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I1014 08:47:28.759063   15224 command_runner.go:130] > Oct 14 15:44:47 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I1014 08:47:28.759063   15224 command_runner.go:130] > Oct 14 15:44:47 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1014 08:47:28.759063   15224 command_runner.go:130] > Oct 14 15:44:47 minikube cri-dockerd[413]: time="2024-10-14T15:44:47Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I1014 08:47:28.759063   15224 command_runner.go:130] > Oct 14 15:44:47 minikube cri-dockerd[413]: time="2024-10-14T15:44:47Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1014 08:47:28.759063   15224 command_runner.go:130] > Oct 14 15:44:47 minikube cri-dockerd[413]: time="2024-10-14T15:44:47Z" level=info msg="Start docker client with request timeout 0s"
	I1014 08:47:28.759063   15224 command_runner.go:130] > Oct 14 15:44:47 minikube cri-dockerd[413]: time="2024-10-14T15:44:47Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1014 08:47:28.759063   15224 command_runner.go:130] > Oct 14 15:44:47 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I1014 08:47:28.759063   15224 command_runner.go:130] > Oct 14 15:44:47 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I1014 08:47:28.759063   15224 command_runner.go:130] > Oct 14 15:44:47 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I1014 08:47:28.759063   15224 command_runner.go:130] > Oct 14 15:44:49 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I1014 08:47:28.759063   15224 command_runner.go:130] > Oct 14 15:44:49 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I1014 08:47:28.759063   15224 command_runner.go:130] > Oct 14 15:44:49 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1014 08:47:28.759063   15224 command_runner.go:130] > Oct 14 15:44:50 minikube cri-dockerd[421]: time="2024-10-14T15:44:50Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I1014 08:47:28.759063   15224 command_runner.go:130] > Oct 14 15:44:50 minikube cri-dockerd[421]: time="2024-10-14T15:44:50Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1014 08:47:28.759063   15224 command_runner.go:130] > Oct 14 15:44:50 minikube cri-dockerd[421]: time="2024-10-14T15:44:50Z" level=info msg="Start docker client with request timeout 0s"
	I1014 08:47:28.759617   15224 command_runner.go:130] > Oct 14 15:44:50 minikube cri-dockerd[421]: time="2024-10-14T15:44:50Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1014 08:47:28.759617   15224 command_runner.go:130] > Oct 14 15:44:50 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I1014 08:47:28.759667   15224 command_runner.go:130] > Oct 14 15:44:50 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I1014 08:47:28.759667   15224 command_runner.go:130] > Oct 14 15:44:50 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I1014 08:47:28.759667   15224 command_runner.go:130] > Oct 14 15:44:52 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I1014 08:47:28.759667   15224 command_runner.go:130] > Oct 14 15:44:52 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I1014 08:47:28.759667   15224 command_runner.go:130] > Oct 14 15:44:52 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I1014 08:47:28.759667   15224 command_runner.go:130] > Oct 14 15:44:52 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I1014 08:47:28.759667   15224 command_runner.go:130] > Oct 14 15:44:52 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
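Note: the cri-docker.service loop above is an ordering failure, not a crash in cri-dockerd itself: processes 221, 413 and 421 each exit fatally between 15:44:44 and 15:44:50 because dockerd is not listening yet (it only starts at 15:45:33 below), and after the third attempt systemd's restart burst limit trips ("Start request repeated too quickly"). The fatal message comes from cri-dockerd asking the daemon for its version over the unix socket; a hypothetical Go probe of the same endpoint (assumptions: the default socket path and the unversioned /version route of the Docker Engine API):

    package main

    import (
        "context"
        "fmt"
        "io"
        "net"
        "net/http"
    )

    func main() {
        // Assumption: dockerd's default socket path inside the minikube VM.
        client := &http.Client{
            Transport: &http.Transport{
                DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
                    return net.Dial("unix", "/var/run/docker.sock")
                },
            },
        }
        resp, err := client.Get("http://unix/version")
        if err != nil {
            // The condition behind "Cannot connect to the Docker daemon at
            // unix:///var/run/docker.sock. Is the docker daemon running?"
            fmt.Println("docker daemon not reachable:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(string(body)) // version JSON once dockerd is up
    }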
	I1014 08:47:28.759916   15224 command_runner.go:130] > Oct 14 15:45:33 multinode-671000 systemd[1]: Starting Docker Application Container Engine...
	I1014 08:47:28.759916   15224 command_runner.go:130] > Oct 14 15:45:33 multinode-671000 dockerd[649]: time="2024-10-14T15:45:33.956984837Z" level=info msg="Starting up"
	I1014 08:47:28.760005   15224 command_runner.go:130] > Oct 14 15:45:33 multinode-671000 dockerd[649]: time="2024-10-14T15:45:33.957924243Z" level=info msg="containerd not running, starting managed containerd"
	I1014 08:47:28.760108   15224 command_runner.go:130] > Oct 14 15:45:33 multinode-671000 dockerd[649]: time="2024-10-14T15:45:33.959335951Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=655
	I1014 08:47:28.760151   15224 command_runner.go:130] > Oct 14 15:45:33 multinode-671000 dockerd[655]: time="2024-10-14T15:45:33.994773864Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	I1014 08:47:28.760222   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.020772213Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I1014 08:47:28.760455   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.021015015Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I1014 08:47:28.760935   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.021095615Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I1014 08:47:28.761054   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.021147816Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I1014 08:47:28.761054   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.021828519Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I1014 08:47:28.761054   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.021976120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I1014 08:47:28.761119   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.022248222Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I1014 08:47:28.761119   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.022376622Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I1014 08:47:28.761181   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.022401523Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I1014 08:47:28.761207   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.022414623Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I1014 08:47:28.761237   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.023030126Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I1014 08:47:28.761237   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.023715230Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I1014 08:47:28.761275   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.027058949Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I1014 08:47:28.761316   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.027212250Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I1014 08:47:28.761356   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.027346050Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I1014 08:47:28.761356   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.027434351Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I1014 08:47:28.761417   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.028070055Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I1014 08:47:28.761459   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.028254556Z" level=info msg="metadata content store policy set" policy=shared
	I1014 08:47:28.761510   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.033722086Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I1014 08:47:28.761510   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.033900187Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I1014 08:47:28.761510   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.033927888Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I1014 08:47:28.761510   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.033944088Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I1014 08:47:28.761587   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.033959488Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I1014 08:47:28.761615   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.034029088Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I1014 08:47:28.761615   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.034638992Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I1014 08:47:28.761615   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.034898493Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I1014 08:47:28.761615   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.034993394Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I1014 08:47:28.761685   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035025394Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I1014 08:47:28.761714   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035042394Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I1014 08:47:28.761714   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035056394Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I1014 08:47:28.761714   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035070894Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I1014 08:47:28.761714   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035091294Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I1014 08:47:28.761814   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035125794Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I1014 08:47:28.761814   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035139394Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I1014 08:47:28.761814   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035152195Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I1014 08:47:28.761874   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035200795Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I1014 08:47:28.761900   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035227495Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I1014 08:47:28.761930   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035242395Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I1014 08:47:28.761930   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035255095Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I1014 08:47:28.761930   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035268595Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I1014 08:47:28.761930   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035283595Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I1014 08:47:28.761930   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035296895Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I1014 08:47:28.761930   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035314495Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I1014 08:47:28.761930   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035330096Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I1014 08:47:28.761930   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035343596Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I1014 08:47:28.761930   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035364096Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I1014 08:47:28.761930   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035376796Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I1014 08:47:28.761930   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035388896Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I1014 08:47:28.761930   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035401196Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I1014 08:47:28.761930   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035419096Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I1014 08:47:28.761930   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035441896Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I1014 08:47:28.761930   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035454496Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I1014 08:47:28.761930   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035465896Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I1014 08:47:28.761930   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035512897Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I1014 08:47:28.761930   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035554297Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I1014 08:47:28.761930   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035568497Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I1014 08:47:28.761930   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035580597Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I1014 08:47:28.762662   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035590797Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I1014 08:47:28.762736   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035604297Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I1014 08:47:28.762736   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035619397Z" level=info msg="NRI interface is disabled by configuration."
	I1014 08:47:28.762804   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035934999Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I1014 08:47:28.762873   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.036229901Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I1014 08:47:28.762873   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.036295501Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I1014 08:47:28.762873   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.036322201Z" level=info msg="containerd successfully booted in 0.043787s"
	I1014 08:47:28.762949   15224 command_runner.go:130] > Oct 14 15:45:35 multinode-671000 dockerd[649]: time="2024-10-14T15:45:35.016752326Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1014 08:47:28.762949   15224 command_runner.go:130] > Oct 14 15:45:35 multinode-671000 dockerd[649]: time="2024-10-14T15:45:35.204043816Z" level=info msg="Loading containers: start."
	I1014 08:47:28.762949   15224 command_runner.go:130] > Oct 14 15:45:35 multinode-671000 dockerd[649]: time="2024-10-14T15:45:35.545951324Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1014 08:47:28.763041   15224 command_runner.go:130] > Oct 14 15:45:35 multinode-671000 dockerd[649]: time="2024-10-14T15:45:35.688138626Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I1014 08:47:28.763041   15224 command_runner.go:130] > Oct 14 15:45:35 multinode-671000 dockerd[649]: time="2024-10-14T15:45:35.780023455Z" level=info msg="Loading containers: done."
	I1014 08:47:28.763109   15224 command_runner.go:130] > Oct 14 15:45:35 multinode-671000 dockerd[649]: time="2024-10-14T15:45:35.809569125Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	I1014 08:47:28.763109   15224 command_runner.go:130] > Oct 14 15:45:35 multinode-671000 dockerd[649]: time="2024-10-14T15:45:35.809610125Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	I1014 08:47:28.763168   15224 command_runner.go:130] > Oct 14 15:45:35 multinode-671000 dockerd[649]: time="2024-10-14T15:45:35.809633825Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
	I1014 08:47:28.763188   15224 command_runner.go:130] > Oct 14 15:45:35 multinode-671000 dockerd[649]: time="2024-10-14T15:45:35.810490930Z" level=info msg="Daemon has completed initialization"
	I1014 08:47:28.763188   15224 command_runner.go:130] > Oct 14 15:45:35 multinode-671000 dockerd[649]: time="2024-10-14T15:45:35.853736479Z" level=info msg="API listen on /var/run/docker.sock"
	I1014 08:47:28.763188   15224 command_runner.go:130] > Oct 14 15:45:35 multinode-671000 dockerd[649]: time="2024-10-14T15:45:35.854139881Z" level=info msg="API listen on [::]:2376"
	I1014 08:47:28.763188   15224 command_runner.go:130] > Oct 14 15:45:35 multinode-671000 systemd[1]: Started Docker Application Container Engine.
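Note: dockerd then boots cleanly at 15:45:35, but with two kernel-side warnings worth reading past the noise: the ip6tables nat table is unavailable, and bridge-nf-call-iptables/bridge-nf-call-ip6tables are disabled, so bridged traffic bypasses iptables until those sysctls are enabled. Both toggles are plain files under /proc; a hypothetical check (not minikube code) that reports their state:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        // Assumption: these paths exist once the br_netfilter module is loaded;
        // each reads "0" when dockerd prints the "is disabled" warnings above.
        for _, key := range []string{
            "/proc/sys/net/bridge/bridge-nf-call-iptables",
            "/proc/sys/net/bridge/bridge-nf-call-ip6tables",
        } {
            b, err := os.ReadFile(key)
            if err != nil {
                fmt.Println(key, "unreadable:", err)
                continue
            }
            fmt.Println(key, "=", strings.TrimSpace(string(b)))
        }
    }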
	I1014 08:47:28.763188   15224 command_runner.go:130] > Oct 14 15:46:01 multinode-671000 systemd[1]: Stopping Docker Application Container Engine...
	I1014 08:47:28.763188   15224 command_runner.go:130] > Oct 14 15:46:01 multinode-671000 dockerd[649]: time="2024-10-14T15:46:01.049459779Z" level=info msg="Processing signal 'terminated'"
	I1014 08:47:28.763188   15224 command_runner.go:130] > Oct 14 15:46:01 multinode-671000 dockerd[649]: time="2024-10-14T15:46:01.053392981Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I1014 08:47:28.763188   15224 command_runner.go:130] > Oct 14 15:46:01 multinode-671000 dockerd[649]: time="2024-10-14T15:46:01.053568081Z" level=info msg="Daemon shutdown complete"
	I1014 08:47:28.763188   15224 command_runner.go:130] > Oct 14 15:46:01 multinode-671000 dockerd[649]: time="2024-10-14T15:46:01.053889681Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I1014 08:47:28.763188   15224 command_runner.go:130] > Oct 14 15:46:01 multinode-671000 dockerd[649]: time="2024-10-14T15:46:01.054172781Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I1014 08:47:28.763188   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 systemd[1]: docker.service: Deactivated successfully.
	I1014 08:47:28.763188   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 systemd[1]: Stopped Docker Application Container Engine.
	I1014 08:47:28.763188   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 systemd[1]: Starting Docker Application Container Engine...
	I1014 08:47:28.763188   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1087]: time="2024-10-14T15:46:02.109177376Z" level=info msg="Starting up"
	I1014 08:47:28.763188   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1087]: time="2024-10-14T15:46:02.110667577Z" level=info msg="containerd not running, starting managed containerd"
	I1014 08:47:28.763188   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1087]: time="2024-10-14T15:46:02.112008177Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1093
	I1014 08:47:28.763188   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.143199292Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	I1014 08:47:28.763188   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.168149004Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I1014 08:47:28.763188   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.168197704Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I1014 08:47:28.763188   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.168231304Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I1014 08:47:28.763188   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.168244704Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I1014 08:47:28.763188   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.168266504Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I1014 08:47:28.763758   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.168317904Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I1014 08:47:28.763812   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.168445004Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I1014 08:47:28.763914   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.168531404Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I1014 08:47:28.763914   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.168550204Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I1014 08:47:28.763914   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.168561104Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I1014 08:47:28.764036   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.168583904Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I1014 08:47:28.764036   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.168690904Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I1014 08:47:28.764102   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.171907506Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I1014 08:47:28.764102   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.172002906Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I1014 08:47:28.764170   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.172175606Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I1014 08:47:28.764170   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.172377606Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I1014 08:47:28.764170   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.172424606Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I1014 08:47:28.764246   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.172461506Z" level=info msg="metadata content store policy set" policy=shared
	I1014 08:47:28.764298   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.172795106Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I1014 08:47:28.764298   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.172882406Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I1014 08:47:28.764298   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.172902406Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I1014 08:47:28.764298   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.172916306Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I1014 08:47:28.764298   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.172930506Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I1014 08:47:28.764298   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.172992206Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I1014 08:47:28.764298   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173380806Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I1014 08:47:28.764298   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173626906Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I1014 08:47:28.764298   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173734806Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I1014 08:47:28.764298   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173758306Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I1014 08:47:28.764298   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173794906Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I1014 08:47:28.764298   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173849506Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I1014 08:47:28.764298   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173864606Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I1014 08:47:28.764298   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173878206Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I1014 08:47:28.764298   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173900507Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I1014 08:47:28.764298   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173916207Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I1014 08:47:28.764298   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173928607Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I1014 08:47:28.764298   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173940507Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I1014 08:47:28.764298   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173959407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I1014 08:47:28.764298   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173973007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I1014 08:47:28.764298   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173985207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I1014 08:47:28.764810   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173998307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I1014 08:47:28.764810   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174010307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I1014 08:47:28.764810   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174023407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I1014 08:47:28.764810   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174035407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I1014 08:47:28.764810   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174047207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I1014 08:47:28.764810   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174077107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I1014 08:47:28.764810   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174095807Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I1014 08:47:28.764810   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174107607Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I1014 08:47:28.764810   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174191507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I1014 08:47:28.764810   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174206607Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I1014 08:47:28.764810   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174229207Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I1014 08:47:28.764810   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174259307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I1014 08:47:28.764810   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174352207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I1014 08:47:28.764810   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174370407Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I1014 08:47:28.764810   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174499607Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I1014 08:47:28.764810   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174541907Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I1014 08:47:28.764810   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174556007Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I1014 08:47:28.764810   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174568207Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I1014 08:47:28.765795   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174578207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I1014 08:47:28.765795   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174598407Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I1014 08:47:28.765795   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174612107Z" level=info msg="NRI interface is disabled by configuration."
	I1014 08:47:28.765795   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174893107Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I1014 08:47:28.765795   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.175192307Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I1014 08:47:28.765795   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.175271607Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I1014 08:47:28.765795   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.175364007Z" level=info msg="containerd successfully booted in 0.032943s"
	I1014 08:47:28.765795   15224 command_runner.go:130] > Oct 14 15:46:03 multinode-671000 dockerd[1087]: time="2024-10-14T15:46:03.157176768Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1014 08:47:28.765795   15224 command_runner.go:130] > Oct 14 15:46:03 multinode-671000 dockerd[1087]: time="2024-10-14T15:46:03.188626383Z" level=info msg="Loading containers: start."
	I1014 08:47:28.765795   15224 command_runner.go:130] > Oct 14 15:46:03 multinode-671000 dockerd[1087]: time="2024-10-14T15:46:03.419822091Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1014 08:47:28.765795   15224 command_runner.go:130] > Oct 14 15:46:03 multinode-671000 dockerd[1087]: time="2024-10-14T15:46:03.533275144Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I1014 08:47:28.765795   15224 command_runner.go:130] > Oct 14 15:46:03 multinode-671000 dockerd[1087]: time="2024-10-14T15:46:03.631380390Z" level=info msg="Loading containers: done."
	I1014 08:47:28.765795   15224 command_runner.go:130] > Oct 14 15:46:03 multinode-671000 dockerd[1087]: time="2024-10-14T15:46:03.656005002Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
	I1014 08:47:28.766807   15224 command_runner.go:130] > Oct 14 15:46:03 multinode-671000 dockerd[1087]: time="2024-10-14T15:46:03.656245502Z" level=info msg="Daemon has completed initialization"
	I1014 08:47:28.766807   15224 command_runner.go:130] > Oct 14 15:46:03 multinode-671000 dockerd[1087]: time="2024-10-14T15:46:03.695426820Z" level=info msg="API listen on /var/run/docker.sock"
	I1014 08:47:28.766807   15224 command_runner.go:130] > Oct 14 15:46:03 multinode-671000 systemd[1]: Started Docker Application Container Engine.
	I1014 08:47:28.766807   15224 command_runner.go:130] > Oct 14 15:46:03 multinode-671000 dockerd[1087]: time="2024-10-14T15:46:03.695638120Z" level=info msg="API listen on [::]:2376"
	I1014 08:47:28.766807   15224 command_runner.go:130] > Oct 14 15:46:04 multinode-671000 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1014 08:47:28.766807   15224 command_runner.go:130] > Oct 14 15:46:04 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:04Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I1014 08:47:28.766807   15224 command_runner.go:130] > Oct 14 15:46:04 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:04Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1014 08:47:28.766807   15224 command_runner.go:130] > Oct 14 15:46:04 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:04Z" level=info msg="Start docker client with request timeout 0s"
	I1014 08:47:28.766807   15224 command_runner.go:130] > Oct 14 15:46:04 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:04Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I1014 08:47:28.766807   15224 command_runner.go:130] > Oct 14 15:46:04 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:04Z" level=info msg="Loaded network plugin cni"
	I1014 08:47:28.766807   15224 command_runner.go:130] > Oct 14 15:46:04 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:04Z" level=info msg="Docker cri networking managed by network plugin cni"
	I1014 08:47:28.766807   15224 command_runner.go:130] > Oct 14 15:46:04 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:04Z" level=info msg="Setting cgroupDriver cgroupfs"
	I1014 08:47:28.766807   15224 command_runner.go:130] > Oct 14 15:46:04 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:04Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I1014 08:47:28.766807   15224 command_runner.go:130] > Oct 14 15:46:04 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:04Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I1014 08:47:28.766807   15224 command_runner.go:130] > Oct 14 15:46:04 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:04Z" level=info msg="Start cri-dockerd grpc backend"
	I1014 08:47:28.766807   15224 command_runner.go:130] > Oct 14 15:46:04 multinode-671000 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I1014 08:47:28.766807   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:09Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7c65d6cfc9-fs9ct_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"2f8cc9a218fef4446839b0253827fc2dd89f8eccbd66ab7f0e5123c033f0aacc\""
	I1014 08:47:28.766807   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:09Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-7dff88458-vlp7j_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"06e529266db4b2cf3ad09285cddeb89d7d11e858b5156cf73f41198c9500b9f2\""
	I1014 08:47:28.766807   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.635579177Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:28.766807   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.635817077Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:28.766807   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.635919877Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:28.766807   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.636083677Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:28.766807   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.762883836Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:28.766807   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.763092036Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:28.766807   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.763114536Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:28.766807   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.765440937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:28.766807   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:10Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1d3033f871fb11cb3095bcf5c5d43615de9685372a45edf226fe52b2f482bc71/resolv.conf as [nameserver 172.20.96.1]"
	I1014 08:47:28.766807   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.846488476Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:28.766807   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.847106376Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:28.766807   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.847254676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:28.766807   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.854373579Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:28.766807   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.883112593Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:28.766807   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.884477393Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:28.766807   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.884514293Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:28.766807   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.884605993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:28.766807   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:10Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6155e8be2d5d725a4259a45fe10f7ceb3fc581d528a6486633b563a59f331127/resolv.conf as [nameserver 172.20.96.1]"
	I1014 08:47:28.767796   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:11Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7bd4c36606eefc91e9ae07ea5683536fc78fdb6f7f752f44d28787b88540a878/resolv.conf as [nameserver 172.20.96.1]"
	I1014 08:47:28.767796   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.061102976Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:28.767796   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.061201476Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:28.767796   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.061221876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:28.767796   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.061393176Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:28.767796   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:11Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0697a11790e806fa4d679e3d97be4c7193692d1cb8f76d882cf3a75aa8e0c238/resolv.conf as [nameserver 172.20.96.1]"
	I1014 08:47:28.767796   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.312465294Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:28.767796   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.312610794Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:28.767796   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.312646494Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:28.767796   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.312762794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:28.767796   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.422697746Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:28.767796   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.422797746Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:28.767796   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.422816346Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:28.767796   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.423001046Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:28.767796   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.500801282Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:28.767796   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.501016583Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:28.767796   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.501037383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:28.767796   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.504117984Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:28.767796   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:15Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I1014 08:47:28.767796   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.472267615Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:28.767796   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.472571215Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:28.767796   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.472597215Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:28.767796   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.472873315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:28.767796   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.475833517Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:28.767796   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.476013017Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:28.767796   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.476107917Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:28.767796   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.476358717Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:28.767796   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.515050835Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:28.767796   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.515249635Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:28.767796   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.515393835Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:28.767796   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.515565835Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:28.767796   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:16Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6f8bdf552734e59df94d489a081585cdaeeaee729f0a0ae92105117d4d744f3f/resolv.conf as [nameserver 172.20.96.1]"
	I1014 08:47:28.767796   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:16Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/cdcdd532ba1369fe0e866b321f18e0bd99a88a2699c325eea17a070d48f2f024/resolv.conf as [nameserver 172.20.96.1]"
	I1014 08:47:28.767796   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.911588321Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:28.768794   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.913177522Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:28.768794   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.913368522Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:28.768794   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.914060722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:28.768794   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:16Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7bcadf1f0885ff3411b477a64c6af366c77eb264ce7a761f174b079b11e2e703/resolv.conf as [nameserver 172.20.96.1]"
	I1014 08:47:28.768794   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:17.063841193Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:28.768794   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:17.063929693Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:28.768794   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:17.063946093Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:28.768794   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:17.064242693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:28.768794   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:17.206735160Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:28.768794   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:17.207544260Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:28.768794   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:17.208633061Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:28.768794   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:17.224429668Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:28.768794   15224 command_runner.go:130] > Oct 14 15:46:47 multinode-671000 dockerd[1087]: time="2024-10-14T15:46:47.556424473Z" level=info msg="ignoring event" container=c76c2585681071ddc096fbc042a644b58d0b7d5d604107c6906a076c7aa1cca6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1014 08:47:28.768794   15224 command_runner.go:130] > Oct 14 15:46:47 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:47.559859508Z" level=info msg="shim disconnected" id=c76c2585681071ddc096fbc042a644b58d0b7d5d604107c6906a076c7aa1cca6 namespace=moby
	I1014 08:47:28.768794   15224 command_runner.go:130] > Oct 14 15:46:47 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:47.560270512Z" level=warning msg="cleaning up after shim disconnected" id=c76c2585681071ddc096fbc042a644b58d0b7d5d604107c6906a076c7aa1cca6 namespace=moby
	I1014 08:47:28.768794   15224 command_runner.go:130] > Oct 14 15:46:47 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:47.560505714Z" level=info msg="cleaning up dead shim" namespace=moby
	I1014 08:47:28.768794   15224 command_runner.go:130] > Oct 14 15:47:03 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:03.070959923Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:28.768794   15224 command_runner.go:130] > Oct 14 15:47:03 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:03.071176624Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:28.768794   15224 command_runner.go:130] > Oct 14 15:47:03 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:03.071240924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:28.768794   15224 command_runner.go:130] > Oct 14 15:47:03 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:03.071756926Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:28.768794   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.071716036Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:28.768794   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.071943436Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:28.768794   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.071968036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:28.768794   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.072116937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:28.768794   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:47:20Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/429b989a1a986d23a2e5aee0de1aef1e683a014bebb587981622bd80a3ac5221/resolv.conf as [nameserver 172.20.96.1]"
	I1014 08:47:28.768794   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.295865797Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:28.768794   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.295993998Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:28.768794   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.296019698Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:28.768794   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.296117898Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:28.768794   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:47:20Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9092d17516eb35243fd461a360605e738727838ee50f870f3bd6c290fd061d20/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I1014 08:47:28.768794   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.536751498Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:28.768794   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.537062099Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:28.768794   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.537100499Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:28.768794   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.537246499Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:28.768794   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.821494873Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:28.769793   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.821592273Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:28.769793   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.821611273Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:28.769793   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.821730874Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
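
Every block in this post-mortem is gathered the same way: minikube's ssh_runner executes a shell command on the node and command_runner re-logs each output line with a "> " (or "! " for stderr) prefix, which is why the docker engine journal above arrives line by line. A minimal sketch of that pattern follows — not minikube's actual implementation, and the journalctl invocation is an assumption modeled on the kubelet gather shown further below:

    package main

    import (
        "bufio"
        "log"
        "os/exec"
    )

    // runAndEcho runs a shell command and re-logs every stdout line with a
    // "> " prefix, mirroring the command_runner output format in this log.
    func runAndEcho(shellCmd string) error {
        cmd := exec.Command("/bin/bash", "-c", shellCmd)
        out, err := cmd.StdoutPipe()
        if err != nil {
            return err
        }
        if err := cmd.Start(); err != nil {
            return err
        }
        sc := bufio.NewScanner(out)
        for sc.Scan() {
            log.Printf("> %s", sc.Text())
        }
        if err := sc.Err(); err != nil {
            return err
        }
        return cmd.Wait()
    }

    func main() {
        // Assumed equivalent of the docker engine journal gather above.
        if err := runAndEcho("sudo journalctl -u docker -n 400"); err != nil {
            log.Fatal(err)
        }
    }
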
	I1014 08:47:28.796399   15224 logs.go:123] Gathering logs for kube-proxy [e83db276dec3] ...
	I1014 08:47:28.796399   15224 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e83db276dec3"
	I1014 08:47:28.824931   15224 command_runner.go:130] ! I1014 15:46:17.821967       1 server_linux.go:66] "Using iptables proxy"
	I1014 08:47:28.825159   15224 command_runner.go:130] ! E1014 15:46:17.985243       1 proxier.go:734] "Error cleaning up nftables rules" err=<
	I1014 08:47:28.825159   15224 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-24: Error: Could not process rule: Operation not supported
	I1014 08:47:28.825159   15224 command_runner.go:130] ! 	add table ip kube-proxy
	I1014 08:47:28.825215   15224 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^
	I1014 08:47:28.825215   15224 command_runner.go:130] !  >
	I1014 08:47:28.825215   15224 command_runner.go:130] ! E1014 15:46:18.020523       1 proxier.go:734] "Error cleaning up nftables rules" err=<
	I1014 08:47:28.825215   15224 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
	I1014 08:47:28.825351   15224 command_runner.go:130] ! 	add table ip6 kube-proxy
	I1014 08:47:28.825351   15224 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^^
	I1014 08:47:28.825351   15224 command_runner.go:130] !  >
	I1014 08:47:28.825351   15224 command_runner.go:130] ! I1014 15:46:18.173230       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["172.20.106.123"]
	I1014 08:47:28.825451   15224 command_runner.go:130] ! E1014 15:46:18.173392       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1014 08:47:28.825451   15224 command_runner.go:130] ! I1014 15:46:18.286207       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1014 08:47:28.825451   15224 command_runner.go:130] ! I1014 15:46:18.287289       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1014 08:47:28.825531   15224 command_runner.go:130] ! I1014 15:46:18.287905       1 server_linux.go:169] "Using iptables Proxier"
	I1014 08:47:28.825575   15224 command_runner.go:130] ! I1014 15:46:18.293792       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1014 08:47:28.825593   15224 command_runner.go:130] ! I1014 15:46:18.300740       1 server.go:483] "Version info" version="v1.31.1"
	I1014 08:47:28.825593   15224 command_runner.go:130] ! I1014 15:46:18.300778       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 08:47:28.825593   15224 command_runner.go:130] ! I1014 15:46:18.305824       1 config.go:199] "Starting service config controller"
	I1014 08:47:28.825648   15224 command_runner.go:130] ! I1014 15:46:18.308209       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1014 08:47:28.825769   15224 command_runner.go:130] ! I1014 15:46:18.308868       1 config.go:328] "Starting node config controller"
	I1014 08:47:28.825769   15224 command_runner.go:130] ! I1014 15:46:18.314183       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1014 08:47:28.825769   15224 command_runner.go:130] ! I1014 15:46:18.309398       1 config.go:105] "Starting endpoint slice config controller"
	I1014 08:47:28.825769   15224 command_runner.go:130] ! I1014 15:46:18.317842       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1014 08:47:28.825837   15224 command_runner.go:130] ! I1014 15:46:18.419882       1 shared_informer.go:320] Caches are synced for node config
	I1014 08:47:28.825837   15224 command_runner.go:130] ! I1014 15:46:18.419918       1 shared_informer.go:320] Caches are synced for service config
	I1014 08:47:28.825837   15224 command_runner.go:130] ! I1014 15:46:18.435586       1 shared_informer.go:320] Caches are synced for endpoint slice config
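
The two "Error cleaning up nftables rules" entries above are kube-proxy's startup cleanup of the nftables backend: it pipes "add table ip kube-proxy" (and the ip6 variant) into nft via /dev/stdin, this guest kernel rejects the rule with "Operation not supported", and kube-proxy carries on with the iptables proxier as the subsequent lines show. The failed operation can be reproduced directly; a sketch, assuming the nft binary is on the node's PATH:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Feed nft the exact rule the error message points at
        // (/dev/stdin:1:1-24: "add table ip kube-proxy").
        cmd := exec.Command("nft", "-f", "/dev/stdin")
        cmd.Stdin = strings.NewReader("add table ip kube-proxy\n")
        if out, err := cmd.CombinedOutput(); err != nil {
            // On this kernel: "Operation not supported", matching the log.
            fmt.Printf("nftables unavailable: %v\n%s", err, out)
            return
        }
        fmt.Println("nftables available")
    }
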
	I1014 08:47:31.328895   15224 api_server.go:253] Checking apiserver healthz at https://172.20.106.123:8443/healthz ...
	I1014 08:47:31.337693   15224 api_server.go:279] https://172.20.106.123:8443/healthz returned 200:
	ok
	I1014 08:47:31.337875   15224 round_trippers.go:463] GET https://172.20.106.123:8443/version
	I1014 08:47:31.337875   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:31.337875   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:31.337875   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:31.339782   15224 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1014 08:47:31.339892   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:31.339892   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:31.339892   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:31.339892   15224 round_trippers.go:580]     Content-Length: 263
	I1014 08:47:31.339892   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:31 GMT
	I1014 08:47:31.339892   15224 round_trippers.go:580]     Audit-Id: 2fb19e10-d3f3-4081-a9fc-50ab014bc482
	I1014 08:47:31.339892   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:31.339892   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:31.339892   15224 request.go:1351] Response Body: {
	  "major": "1",
	  "minor": "31",
	  "gitVersion": "v1.31.1",
	  "gitCommit": "948afe5ca072329a73c8e79ed5938717a5cb3d21",
	  "gitTreeState": "clean",
	  "buildDate": "2024-09-11T21:22:08Z",
	  "goVersion": "go1.22.6",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1014 08:47:31.340067   15224 api_server.go:141] control plane version: v1.31.1
	I1014 08:47:31.340067   15224 api_server.go:131] duration metric: took 3.711038s to wait for apiserver health ...
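
The apiserver health wait above is two plain HTTPS requests: GET /healthz must return 200 with the literal body "ok", then GET /version is decoded for the control-plane version (v1.31.1 here). A self-contained client sketch; skipping TLS verification and relying on anonymous access to these endpoints are simplifying assumptions, since minikube itself uses the cluster CA and credentials from its kubeconfig:

    package main

    import (
        "crypto/tls"
        "encoding/json"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Assumption: skip CA verification for brevity; a real client
            // would trust the cluster CA from the kubeconfig instead.
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }

        // Liveness probe: expect 200 and the body "ok".
        resp, err := client.Get("https://172.20.106.123:8443/healthz")
        if err != nil {
            panic(err)
        }
        body, _ := io.ReadAll(resp.Body)
        resp.Body.Close()
        fmt.Printf("healthz: %d %s\n", resp.StatusCode, body)

        // Version: decode the JSON response body shown above.
        resp, err = client.Get("https://172.20.106.123:8443/version")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        var v struct {
            Major      string `json:"major"`
            Minor      string `json:"minor"`
            GitVersion string `json:"gitVersion"`
        }
        if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
            panic(err)
        }
        fmt.Printf("control plane version: %s\n", v.GitVersion)
    }
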
	I1014 08:47:31.340067   15224 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 08:47:31.350071   15224 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 08:47:31.375786   15224 command_runner.go:130] > a834664fc8b8
	I1014 08:47:31.375786   15224 logs.go:282] 1 containers: [a834664fc8b8]
	I1014 08:47:31.385213   15224 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 08:47:31.411121   15224 command_runner.go:130] > 48c8492e231e
	I1014 08:47:31.411121   15224 logs.go:282] 1 containers: [48c8492e231e]
	I1014 08:47:31.420416   15224 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 08:47:31.445869   15224 command_runner.go:130] > 5d223e2e64fc
	I1014 08:47:31.445869   15224 command_runner.go:130] > d9831e9f8ce8
	I1014 08:47:31.445955   15224 logs.go:282] 2 containers: [5d223e2e64fc d9831e9f8ce8]
	I1014 08:47:31.455088   15224 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 08:47:31.477326   15224 command_runner.go:130] > d428685276e1
	I1014 08:47:31.477410   15224 command_runner.go:130] > 661e75bbf6b4
	I1014 08:47:31.477410   15224 logs.go:282] 2 containers: [d428685276e1 661e75bbf6b4]
	I1014 08:47:31.485447   15224 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 08:47:31.509014   15224 command_runner.go:130] > e83db276dec3
	I1014 08:47:31.509014   15224 command_runner.go:130] > ea19428d7036
	I1014 08:47:31.509014   15224 logs.go:282] 2 containers: [e83db276dec3 ea19428d7036]
	I1014 08:47:31.518018   15224 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 08:47:31.539818   15224 command_runner.go:130] > 8af48c446f7e
	I1014 08:47:31.539818   15224 command_runner.go:130] > 712aad669c9f
	I1014 08:47:31.539818   15224 logs.go:282] 2 containers: [8af48c446f7e 712aad669c9f]
	I1014 08:47:31.548793   15224 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 08:47:31.570120   15224 command_runner.go:130] > bba035362eb9
	I1014 08:47:31.570120   15224 command_runner.go:130] > fcdf89a3ac8c
	I1014 08:47:31.570120   15224 logs.go:282] 2 containers: [bba035362eb9 fcdf89a3ac8c]
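
Each control-plane component container is located with the same "docker ps -a --filter=name=k8s_<component> --format={{.ID}}" call logged above; components that were restarted report two IDs (the pre-restart and post-restart containers). Collecting them all is a loop over the name filters — a sketch using the exact commands from the log:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // The exact name filters issued above, one per component.
        for _, name := range []string{
            "k8s_kube-apiserver", "k8s_etcd", "k8s_coredns",
            "k8s_kube-scheduler", "k8s_kube-proxy",
            "k8s_kube-controller-manager", "k8s_kindnet",
        } {
            out, err := exec.Command("docker", "ps", "-a",
                "--filter", "name="+name, "--format", "{{.ID}}").Output()
            if err != nil {
                fmt.Printf("%s: %v\n", name, err)
                continue
            }
            ids := strings.Fields(string(out))
            fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
        }
    }
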
	I1014 08:47:31.570120   15224 logs.go:123] Gathering logs for kubelet ...
	I1014 08:47:31.570120   15224 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 08:47:31.600080   15224 command_runner.go:130] > Oct 14 15:46:05 multinode-671000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I1014 08:47:31.600214   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 kubelet[1480]: I1014 15:46:06.037054    1480 server.go:486] "Kubelet version" kubeletVersion="v1.31.1"
	I1014 08:47:31.600214   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 kubelet[1480]: I1014 15:46:06.037147    1480 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 08:47:31.600214   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 kubelet[1480]: I1014 15:46:06.038385    1480 server.go:929] "Client rotation is on, will bootstrap in background"
	I1014 08:47:31.600282   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 kubelet[1480]: E1014 15:46:06.039788    1480 run.go:72] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I1014 08:47:31.600308   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I1014 08:47:31.600308   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I1014 08:47:31.600308   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I1014 08:47:31.600400   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I1014 08:47:31.600400   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I1014 08:47:31.600461   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 kubelet[1540]: I1014 15:46:06.835721    1540 server.go:486] "Kubelet version" kubeletVersion="v1.31.1"
	I1014 08:47:31.600485   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 kubelet[1540]: I1014 15:46:06.835931    1540 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 08:47:31.600485   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 kubelet[1540]: I1014 15:46:06.836250    1540 server.go:929] "Client rotation is on, will bootstrap in background"
	I1014 08:47:31.600485   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 kubelet[1540]: E1014 15:46:06.836463    1540 run.go:72] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I1014 08:47:31.600485   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I1014 08:47:31.600591   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I1014 08:47:31.600591   15224 command_runner.go:130] > Oct 14 15:46:07 multinode-671000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I1014 08:47:31.600591   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I1014 08:47:31.600591   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.687712    1622 server.go:486] "Kubelet version" kubeletVersion="v1.31.1"
	I1014 08:47:31.600686   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.688474    1622 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 08:47:31.600686   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.689105    1622 server.go:929] "Client rotation is on, will bootstrap in background"
	I1014 08:47:31.600748   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.691939    1622 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I1014 08:47:31.600773   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.718455    1622 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1014 08:47:31.600803   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: E1014 15:46:09.739709    1622 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
	I1014 08:47:31.600803   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.739760    1622 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
	I1014 08:47:31.600803   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.744155    1622 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I1014 08:47:31.600803   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.744395    1622 server.go:812] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I1014 08:47:31.600803   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.744486    1622 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
	I1014 08:47:31.600803   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.744668    1622 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I1014 08:47:31.600803   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.744761    1622 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"multinode-671000","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
	I1014 08:47:31.600803   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.746378    1622 topology_manager.go:138] "Creating topology manager with none policy"
	I1014 08:47:31.600803   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.746460    1622 container_manager_linux.go:300] "Creating device plugin manager"
	I1014 08:47:31.600803   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.746633    1622 state_mem.go:36] "Initialized new in-memory state store"
	I1014 08:47:31.600803   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.749964    1622 kubelet.go:408] "Attempting to sync node with API server"
	I1014 08:47:31.600803   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.750004    1622 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
	I1014 08:47:31.600803   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.750036    1622 kubelet.go:314] "Adding apiserver pod source"
	I1014 08:47:31.600803   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.750844    1622 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I1014 08:47:31.600803   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.756693    1622 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="docker" version="27.3.1" apiVersion="v1"
	I1014 08:47:31.600803   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.763816    1622 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
	I1014 08:47:31.600803   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: W1014 15:46:09.764725    1622 probe.go:272] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I1014 08:47:31.600803   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.766925    1622 server.go:1269] "Started kubelet"
	I1014 08:47:31.600803   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: W1014 15:46:09.767088    1622 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-671000&limit=500&resourceVersion=0": dial tcp 172.20.106.123:8443: connect: connection refused
	I1014 08:47:31.600803   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: E1014 15:46:09.767172    1622 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-671000&limit=500&resourceVersion=0\": dial tcp 172.20.106.123:8443: connect: connection refused" logger="UnhandledError"
	I1014 08:47:31.601403   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: W1014 15:46:09.769189    1622 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.20.106.123:8443: connect: connection refused
	I1014 08:47:31.601403   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: E1014 15:46:09.769350    1622 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.20.106.123:8443: connect: connection refused" logger="UnhandledError"
	I1014 08:47:31.601403   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.769454    1622 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I1014 08:47:31.601403   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.770134    1622 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I1014 08:47:31.601604   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: E1014 15:46:09.772237    1622 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.20.106.123:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-671000.17fe5c47a6bff791  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-671000,UID:multinode-671000,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-671000,},FirstTimestamp:2024-10-14 15:46:09.766881169 +0000 UTC m=+0.158153075,LastTimestamp:2024-10-14 15:46:09.766881169 +0000 UTC m=+0.158153075,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-671000,}"
	I1014 08:47:31.601604   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.773096    1622 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
	I1014 08:47:31.601604   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.774576    1622 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I1014 08:47:31.601703   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.777686    1622 server.go:460] "Adding debug handlers to kubelet server"
	I1014 08:47:31.601727   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.780950    1622 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
	I1014 08:47:31.601755   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.788697    1622 volume_manager.go:289] "Starting Kubelet Volume Manager"
	I1014 08:47:31.601755   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: E1014 15:46:09.789003    1622 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"multinode-671000\" not found"
	I1014 08:47:31.601755   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.789640    1622 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
	I1014 08:47:31.601755   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.800447    1622 factory.go:221] Registration of the systemd container factory successfully
	I1014 08:47:31.601755   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.800536    1622 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I1014 08:47:31.601755   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.800587    1622 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
	I1014 08:47:31.601755   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: W1014 15:46:09.811192    1622 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.20.106.123:8443: connect: connection refused
	I1014 08:47:31.601755   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: E1014 15:46:09.811498    1622 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.20.106.123:8443: connect: connection refused" logger="UnhandledError"
	I1014 08:47:31.601755   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: E1014 15:46:09.812017    1622 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-671000?timeout=10s\": dial tcp 172.20.106.123:8443: connect: connection refused" interval="200ms"
	I1014 08:47:31.601755   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.863497    1622 cpu_manager.go:214] "Starting CPU manager" policy="none"
	I1014 08:47:31.601755   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.863530    1622 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
	I1014 08:47:31.601755   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.863554    1622 state_mem.go:36] "Initialized new in-memory state store"
	I1014 08:47:31.601755   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.868881    1622 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I1014 08:47:31.601755   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.868953    1622 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I1014 08:47:31.601755   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.868995    1622 policy_none.go:49] "None policy: Start"
	I1014 08:47:31.601755   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.872200    1622 reconciler.go:26] "Reconciler: start to sync state"
	I1014 08:47:31.601755   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.877834    1622 memory_manager.go:170] "Starting memorymanager" policy="None"
	I1014 08:47:31.601755   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.877929    1622 state_mem.go:35] "Initializing new in-memory state store"
	I1014 08:47:31.601755   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.878704    1622 state_mem.go:75] "Updated machine memory state"
	I1014 08:47:31.601755   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.884555    1622 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I1014 08:47:31.601755   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.885687    1622 eviction_manager.go:189] "Eviction manager: starting control loop"
	I1014 08:47:31.601755   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.885828    1622 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I1014 08:47:31.601755   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: E1014 15:46:09.889524    1622 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-671000\" not found"
	I1014 08:47:31.601755   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.892062    1622 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I1014 08:47:31.601755   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.900012    1622 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I1014 08:47:31.601755   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.905094    1622 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I1014 08:47:31.601755   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.906277    1622 status_manager.go:217] "Starting to sync pod status with apiserver"
	I1014 08:47:31.601755   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.906885    1622 kubelet.go:2321] "Starting kubelet main sync loop"
	I1014 08:47:31.602288   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: E1014 15:46:09.907458    1622 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
	I1014 08:47:31.602288   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: W1014 15:46:09.914061    1622 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.20.106.123:8443: connect: connection refused
	I1014 08:47:31.602472   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: E1014 15:46:09.914371    1622 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.20.106.123:8443: connect: connection refused" logger="UnhandledError"
	I1014 08:47:31.602528   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: E1014 15:46:09.933056    1622 iptables.go:577] "Could not set up iptables canary" err=<
	I1014 08:47:31.602528   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I1014 08:47:31.602528   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I1014 08:47:31.602631   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I1014 08:47:31.602659   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I1014 08:47:31.602688   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.987581    1622 kubelet_node_status.go:72] "Attempting to register node" node="multinode-671000"
	I1014 08:47:31.602719   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: E1014 15:46:09.988812    1622 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.20.106.123:8443: connect: connection refused" node="multinode-671000"
	I1014 08:47:31.602748   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.008458    1622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f8cc9a218fef4446839b0253827fc2dd89f8eccbd66ab7f0e5123c033f0aacc"
	I1014 08:47:31.602748   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: E1014 15:46:10.013887    1622 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-671000?timeout=10s\": dial tcp 172.20.106.123:8443: connect: connection refused" interval="400ms"
	I1014 08:47:31.602748   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.014354    1622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5733d27d2f1c328dbd19f6392a86e426f344b6f17c65211404fa797e84b69c9"
	I1014 08:47:31.602748   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.014436    1622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06e529266db4b2cf3ad09285cddeb89d7d11e858b5156cf73f41198c9500b9f2"
	I1014 08:47:31.602748   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.014506    1622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e48ddcfdf90ad3bfbe621f27c97a331f448947ca77dbd98ab3c9daef2c84e22"
	I1014 08:47:31.602748   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.020161    1622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2dc78387553ff4b78626f5e6aa103a40ec97f42ef49363e27d7d3698cd0df26f"
	I1014 08:47:31.602748   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.035902    1622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c6be2bd1889b6c5f021362c07c3a88f7f0ff266bb9e8ba4106d666b0f1d267d"
	I1014 08:47:31.602748   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.049024    1622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1863de70f2316e54fa61ef7c5c6aba94808669b81b1cc811dce745011ee807cb"
	I1014 08:47:31.602748   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.065264    1622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7144d8ce208cf8c176ad1fc9980a72d450a3d558c4f8f9ee453dea6b22358085"
	I1014 08:47:31.602748   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.079145    1622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bfdde08319e32b93d740933d5ab50829de8f9f3edacce92efe155b4ada4f4212"
	I1014 08:47:31.602748   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.179820    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/864b69f35cb25e9dd5d87a753a055a10-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-671000\" (UID: \"864b69f35cb25e9dd5d87a753a055a10\") " pod="kube-system/kube-apiserver-multinode-671000"
	I1014 08:47:31.602748   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.179915    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b83e31864e0ae98d29d960866012ecb0-k8s-certs\") pod \"kube-controller-manager-multinode-671000\" (UID: \"b83e31864e0ae98d29d960866012ecb0\") " pod="kube-system/kube-controller-manager-multinode-671000"
	I1014 08:47:31.602748   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.179945    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b83e31864e0ae98d29d960866012ecb0-kubeconfig\") pod \"kube-controller-manager-multinode-671000\" (UID: \"b83e31864e0ae98d29d960866012ecb0\") " pod="kube-system/kube-controller-manager-multinode-671000"
	I1014 08:47:31.602748   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.179963    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3e987cfaedc75c39145e8fc131c60c81-kubeconfig\") pod \"kube-scheduler-multinode-671000\" (UID: \"3e987cfaedc75c39145e8fc131c60c81\") " pod="kube-system/kube-scheduler-multinode-671000"
	I1014 08:47:31.602748   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.179984    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/679486a8f27de5805bc2e87fb1920dce-etcd-certs\") pod \"etcd-multinode-671000\" (UID: \"679486a8f27de5805bc2e87fb1920dce\") " pod="kube-system/etcd-multinode-671000"
	I1014 08:47:31.603277   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.180012    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/679486a8f27de5805bc2e87fb1920dce-etcd-data\") pod \"etcd-multinode-671000\" (UID: \"679486a8f27de5805bc2e87fb1920dce\") " pod="kube-system/etcd-multinode-671000"
	I1014 08:47:31.603339   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.180036    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/864b69f35cb25e9dd5d87a753a055a10-ca-certs\") pod \"kube-apiserver-multinode-671000\" (UID: \"864b69f35cb25e9dd5d87a753a055a10\") " pod="kube-system/kube-apiserver-multinode-671000"
	I1014 08:47:31.603339   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.180050    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/864b69f35cb25e9dd5d87a753a055a10-k8s-certs\") pod \"kube-apiserver-multinode-671000\" (UID: \"864b69f35cb25e9dd5d87a753a055a10\") " pod="kube-system/kube-apiserver-multinode-671000"
	I1014 08:47:31.603432   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.180068    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b83e31864e0ae98d29d960866012ecb0-ca-certs\") pod \"kube-controller-manager-multinode-671000\" (UID: \"b83e31864e0ae98d29d960866012ecb0\") " pod="kube-system/kube-controller-manager-multinode-671000"
	I1014 08:47:31.603432   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.180089    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b83e31864e0ae98d29d960866012ecb0-flexvolume-dir\") pod \"kube-controller-manager-multinode-671000\" (UID: \"b83e31864e0ae98d29d960866012ecb0\") " pod="kube-system/kube-controller-manager-multinode-671000"
	I1014 08:47:31.603520   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.180113    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b83e31864e0ae98d29d960866012ecb0-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-671000\" (UID: \"b83e31864e0ae98d29d960866012ecb0\") " pod="kube-system/kube-controller-manager-multinode-671000"
	I1014 08:47:31.603548   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.191857    1622 kubelet_node_status.go:72] "Attempting to register node" node="multinode-671000"
	I1014 08:47:31.603593   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: E1014 15:46:10.193195    1622 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.20.106.123:8443: connect: connection refused" node="multinode-671000"
	I1014 08:47:31.603631   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: E1014 15:46:10.421148    1622 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-671000?timeout=10s\": dial tcp 172.20.106.123:8443: connect: connection refused" interval="800ms"
	I1014 08:47:31.603631   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.595286    1622 kubelet_node_status.go:72] "Attempting to register node" node="multinode-671000"
	I1014 08:47:31.603631   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: E1014 15:46:10.596178    1622 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.20.106.123:8443: connect: connection refused" node="multinode-671000"
	I1014 08:47:31.603631   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: W1014 15:46:10.601172    1622 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.20.106.123:8443: connect: connection refused
	I1014 08:47:31.603631   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: E1014 15:46:10.601259    1622 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.20.106.123:8443: connect: connection refused" logger="UnhandledError"
	I1014 08:47:31.603631   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: W1014 15:46:10.913794    1622 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.20.106.123:8443: connect: connection refused
	I1014 08:47:31.603631   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: E1014 15:46:10.913870    1622 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.20.106.123:8443: connect: connection refused" logger="UnhandledError"
	I1014 08:47:31.603631   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 kubelet[1622]: W1014 15:46:11.078571    1622 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.20.106.123:8443: connect: connection refused
	I1014 08:47:31.603631   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 kubelet[1622]: E1014 15:46:11.078638    1622 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.20.106.123:8443: connect: connection refused" logger="UnhandledError"
	I1014 08:47:31.603631   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 kubelet[1622]: W1014 15:46:11.151154    1622 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-671000&limit=500&resourceVersion=0": dial tcp 172.20.106.123:8443: connect: connection refused
	I1014 08:47:31.603631   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 kubelet[1622]: E1014 15:46:11.151247    1622 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-671000&limit=500&resourceVersion=0\": dial tcp 172.20.106.123:8443: connect: connection refused" logger="UnhandledError"
	I1014 08:47:31.603631   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 kubelet[1622]: E1014 15:46:11.223425    1622 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-671000?timeout=10s\": dial tcp 172.20.106.123:8443: connect: connection refused" interval="1.6s"
	I1014 08:47:31.603631   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 kubelet[1622]: I1014 15:46:11.306759    1622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0697a11790e806fa4d679e3d97be4c7193692d1cb8f76d882cf3a75aa8e0c238"
	I1014 08:47:31.603631   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 kubelet[1622]: I1014 15:46:11.397496    1622 kubelet_node_status.go:72] "Attempting to register node" node="multinode-671000"
	I1014 08:47:31.603631   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 kubelet[1622]: E1014 15:46:11.399409    1622 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.20.106.123:8443: connect: connection refused" node="multinode-671000"
	I1014 08:47:31.603631   15224 command_runner.go:130] > Oct 14 15:46:13 multinode-671000 kubelet[1622]: I1014 15:46:13.001489    1622 kubelet_node_status.go:72] "Attempting to register node" node="multinode-671000"
	I1014 08:47:31.603631   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.316022    1622 kubelet_node_status.go:111] "Node was previously registered" node="multinode-671000"
	I1014 08:47:31.603631   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.316194    1622 kubelet_node_status.go:75] "Successfully registered node" node="multinode-671000"
	I1014 08:47:31.603631   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.316226    1622 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I1014 08:47:31.603631   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.317405    1622 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I1014 08:47:31.604182   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.318741    1622 setters.go:600] "Node became not ready" node="multinode-671000" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-10-14T15:46:15Z","lastTransitionTime":"2024-10-14T15:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I1014 08:47:31.604225   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: E1014 15:46:15.671751    1622 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"etcd-multinode-671000\" already exists" pod="kube-system/etcd-multinode-671000"
	I1014 08:47:31.604225   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.765668    1622 apiserver.go:52] "Watching apiserver"
	I1014 08:47:31.604225   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: E1014 15:46:15.771464    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:31.604328   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: E1014 15:46:15.772813    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:31.604415   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.774456    1622 kubelet.go:1895] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-671000" podUID="80ea37b8-9db1-4a39-9e9e-51c01edadfb1"
	I1014 08:47:31.604443   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.790436    1622 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	I1014 08:47:31.604443   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.804744    1622 kubelet.go:1900] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-multinode-671000"
	I1014 08:47:31.604443   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.875635    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f8d14473-8859-4015-84e9-d00656cc00c9-xtables-lock\") pod \"kube-proxy-r74dx\" (UID: \"f8d14473-8859-4015-84e9-d00656cc00c9\") " pod="kube-system/kube-proxy-r74dx"
	I1014 08:47:31.604443   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.875831    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a508bbf9-7565-4c73-98cb-e9684985c298-xtables-lock\") pod \"kindnet-wqrx6\" (UID: \"a508bbf9-7565-4c73-98cb-e9684985c298\") " pod="kube-system/kindnet-wqrx6"
	I1014 08:47:31.604443   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.876217    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f8d14473-8859-4015-84e9-d00656cc00c9-lib-modules\") pod \"kube-proxy-r74dx\" (UID: \"f8d14473-8859-4015-84e9-d00656cc00c9\") " pod="kube-system/kube-proxy-r74dx"
	I1014 08:47:31.604443   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.876424    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/fde8ff75-bc7f-4db4-b098-c3a08b38d205-tmp\") pod \"storage-provisioner\" (UID: \"fde8ff75-bc7f-4db4-b098-c3a08b38d205\") " pod="kube-system/storage-provisioner"
	I1014 08:47:31.604443   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.876537    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/a508bbf9-7565-4c73-98cb-e9684985c298-cni-cfg\") pod \"kindnet-wqrx6\" (UID: \"a508bbf9-7565-4c73-98cb-e9684985c298\") " pod="kube-system/kindnet-wqrx6"
	I1014 08:47:31.604443   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.876562    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a508bbf9-7565-4c73-98cb-e9684985c298-lib-modules\") pod \"kindnet-wqrx6\" (UID: \"a508bbf9-7565-4c73-98cb-e9684985c298\") " pod="kube-system/kindnet-wqrx6"
	I1014 08:47:31.604443   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: E1014 15:46:15.877886    1622 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I1014 08:47:31.604443   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: E1014 15:46:15.877952    1622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fd736862-9e3e-4a3d-9a86-08efd2338477-config-volume podName:fd736862-9e3e-4a3d-9a86-08efd2338477 nodeName:}" failed. No retries permitted until 2024-10-14 15:46:16.377930736 +0000 UTC m=+6.769202642 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/fd736862-9e3e-4a3d-9a86-08efd2338477-config-volume") pod "coredns-7c65d6cfc9-fs9ct" (UID: "fd736862-9e3e-4a3d-9a86-08efd2338477") : object "kube-system"/"coredns" not registered
	I1014 08:47:31.604443   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.896550    1622 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	I1014 08:47:31.604443   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: E1014 15:46:15.904462    1622 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:31.604443   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: E1014 15:46:15.904557    1622 projected.go:194] Error preparing data for projected volume kube-api-access-46k9l for pod default/busybox-7dff88458-vlp7j: object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:31.604443   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: E1014 15:46:15.904737    1622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/99068807-9f92-42f1-a1a0-fb6e533dc61a-kube-api-access-46k9l podName:99068807-9f92-42f1-a1a0-fb6e533dc61a nodeName:}" failed. No retries permitted until 2024-10-14 15:46:16.404658149 +0000 UTC m=+6.795930055 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-46k9l" (UniqueName: "kubernetes.io/projected/99068807-9f92-42f1-a1a0-fb6e533dc61a-kube-api-access-46k9l") pod "busybox-7dff88458-vlp7j" (UID: "99068807-9f92-42f1-a1a0-fb6e533dc61a") : object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:31.604443   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.919872    1622 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cf38ccc62eb74f6e658e1f66ae8cab1" path="/var/lib/kubelet/pods/3cf38ccc62eb74f6e658e1f66ae8cab1/volumes"
	I1014 08:47:31.604443   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.921055    1622 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-671000" podStartSLOduration=0.921039556 podStartE2EDuration="921.039556ms" podCreationTimestamp="2024-10-14 15:46:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-14 15:46:15.920643156 +0000 UTC m=+6.311915162" watchObservedRunningTime="2024-10-14 15:46:15.921039556 +0000 UTC m=+6.312311562"
	I1014 08:47:31.604443   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.921516    1622 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="778fdb620bffec66f911bf24e3c8210b" path="/var/lib/kubelet/pods/778fdb620bffec66f911bf24e3c8210b/volumes"
	I1014 08:47:31.604977   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 kubelet[1622]: E1014 15:46:16.380142    1622 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I1014 08:47:31.605166   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 kubelet[1622]: E1014 15:46:16.380233    1622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fd736862-9e3e-4a3d-9a86-08efd2338477-config-volume podName:fd736862-9e3e-4a3d-9a86-08efd2338477 nodeName:}" failed. No retries permitted until 2024-10-14 15:46:17.380214172 +0000 UTC m=+7.771486078 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/fd736862-9e3e-4a3d-9a86-08efd2338477-config-volume") pod "coredns-7c65d6cfc9-fs9ct" (UID: "fd736862-9e3e-4a3d-9a86-08efd2338477") : object "kube-system"/"coredns" not registered
	I1014 08:47:31.605229   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 kubelet[1622]: E1014 15:46:16.480798    1622 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:31.605229   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 kubelet[1622]: E1014 15:46:16.480831    1622 projected.go:194] Error preparing data for projected volume kube-api-access-46k9l for pod default/busybox-7dff88458-vlp7j: object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:31.605304   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 kubelet[1622]: E1014 15:46:16.480915    1622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/99068807-9f92-42f1-a1a0-fb6e533dc61a-kube-api-access-46k9l podName:99068807-9f92-42f1-a1a0-fb6e533dc61a nodeName:}" failed. No retries permitted until 2024-10-14 15:46:17.480897019 +0000 UTC m=+7.872168925 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-46k9l" (UniqueName: "kubernetes.io/projected/99068807-9f92-42f1-a1a0-fb6e533dc61a-kube-api-access-46k9l") pod "busybox-7dff88458-vlp7j" (UID: "99068807-9f92-42f1-a1a0-fb6e533dc61a") : object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:31.605370   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 kubelet[1622]: I1014 15:46:16.655226    1622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f8bdf552734e59df94d489a081585cdaeeaee729f0a0ae92105117d4d744f3f"
	I1014 08:47:31.605396   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 kubelet[1622]: I1014 15:46:16.670380    1622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cdcdd532ba1369fe0e866b321f18e0bd99a88a2699c325eea17a070d48f2f024"
	I1014 08:47:31.605493   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 kubelet[1622]: I1014 15:46:16.981444    1622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7bcadf1f0885ff3411b477a64c6af366c77eb264ce7a761f174b079b11e2e703"
	I1014 08:47:31.605542   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 kubelet[1622]: I1014 15:46:16.981500    1622 kubelet.go:1895] "Trying to delete pod" pod="kube-system/etcd-multinode-671000" podUID="56dfdf16-1224-41e3-94de-9d7f4021a17d"
	I1014 08:47:31.605565   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 kubelet[1622]: E1014 15:46:16.982831    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:31.605565   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 kubelet[1622]: I1014 15:46:17.011276    1622 kubelet.go:1900] "Deleted mirror pod because it is outdated" pod="kube-system/etcd-multinode-671000"
	I1014 08:47:31.605623   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 kubelet[1622]: E1014 15:46:17.388224    1622 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I1014 08:47:31.605673   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 kubelet[1622]: E1014 15:46:17.388370    1622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fd736862-9e3e-4a3d-9a86-08efd2338477-config-volume podName:fd736862-9e3e-4a3d-9a86-08efd2338477 nodeName:}" failed. No retries permitted until 2024-10-14 15:46:19.388351245 +0000 UTC m=+9.779623151 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/fd736862-9e3e-4a3d-9a86-08efd2338477-config-volume") pod "coredns-7c65d6cfc9-fs9ct" (UID: "fd736862-9e3e-4a3d-9a86-08efd2338477") : object "kube-system"/"coredns" not registered
	I1014 08:47:31.605673   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 kubelet[1622]: E1014 15:46:17.489591    1622 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:31.605673   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 kubelet[1622]: E1014 15:46:17.489649    1622 projected.go:194] Error preparing data for projected volume kube-api-access-46k9l for pod default/busybox-7dff88458-vlp7j: object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:31.605673   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 kubelet[1622]: E1014 15:46:17.489828    1622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/99068807-9f92-42f1-a1a0-fb6e533dc61a-kube-api-access-46k9l podName:99068807-9f92-42f1-a1a0-fb6e533dc61a nodeName:}" failed. No retries permitted until 2024-10-14 15:46:19.489808492 +0000 UTC m=+9.881080398 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-46k9l" (UniqueName: "kubernetes.io/projected/99068807-9f92-42f1-a1a0-fb6e533dc61a-kube-api-access-46k9l") pod "busybox-7dff88458-vlp7j" (UID: "99068807-9f92-42f1-a1a0-fb6e533dc61a") : object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:31.605673   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 kubelet[1622]: E1014 15:46:17.915482    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:31.605673   15224 command_runner.go:130] > Oct 14 15:46:18 multinode-671000 kubelet[1622]: I1014 15:46:18.163696    1622 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-multinode-671000" podStartSLOduration=1.163677409 podStartE2EDuration="1.163677409s" podCreationTimestamp="2024-10-14 15:46:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-14 15:46:18.133766095 +0000 UTC m=+8.525038101" watchObservedRunningTime="2024-10-14 15:46:18.163677409 +0000 UTC m=+8.554949415"
	I1014 08:47:31.605673   15224 command_runner.go:130] > Oct 14 15:46:18 multinode-671000 kubelet[1622]: E1014 15:46:18.908674    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:31.605673   15224 command_runner.go:130] > Oct 14 15:46:19 multinode-671000 kubelet[1622]: E1014 15:46:19.405477    1622 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I1014 08:47:31.605673   15224 command_runner.go:130] > Oct 14 15:46:19 multinode-671000 kubelet[1622]: E1014 15:46:19.405614    1622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fd736862-9e3e-4a3d-9a86-08efd2338477-config-volume podName:fd736862-9e3e-4a3d-9a86-08efd2338477 nodeName:}" failed. No retries permitted until 2024-10-14 15:46:23.405594191 +0000 UTC m=+13.796866097 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/fd736862-9e3e-4a3d-9a86-08efd2338477-config-volume") pod "coredns-7c65d6cfc9-fs9ct" (UID: "fd736862-9e3e-4a3d-9a86-08efd2338477") : object "kube-system"/"coredns" not registered
	I1014 08:47:31.605673   15224 command_runner.go:130] > Oct 14 15:46:19 multinode-671000 kubelet[1622]: E1014 15:46:19.506858    1622 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:31.605673   15224 command_runner.go:130] > Oct 14 15:46:19 multinode-671000 kubelet[1622]: E1014 15:46:19.507035    1622 projected.go:194] Error preparing data for projected volume kube-api-access-46k9l for pod default/busybox-7dff88458-vlp7j: object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:31.605673   15224 command_runner.go:130] > Oct 14 15:46:19 multinode-671000 kubelet[1622]: E1014 15:46:19.507122    1622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/99068807-9f92-42f1-a1a0-fb6e533dc61a-kube-api-access-46k9l podName:99068807-9f92-42f1-a1a0-fb6e533dc61a nodeName:}" failed. No retries permitted until 2024-10-14 15:46:23.507105839 +0000 UTC m=+13.898377845 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-46k9l" (UniqueName: "kubernetes.io/projected/99068807-9f92-42f1-a1a0-fb6e533dc61a-kube-api-access-46k9l") pod "busybox-7dff88458-vlp7j" (UID: "99068807-9f92-42f1-a1a0-fb6e533dc61a") : object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:31.605673   15224 command_runner.go:130] > Oct 14 15:46:19 multinode-671000 kubelet[1622]: E1014 15:46:19.931507    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:31.605673   15224 command_runner.go:130] > Oct 14 15:46:20 multinode-671000 kubelet[1622]: E1014 15:46:20.907760    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:31.605673   15224 command_runner.go:130] > Oct 14 15:46:21 multinode-671000 kubelet[1622]: E1014 15:46:21.908994    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:31.606297   15224 command_runner.go:130] > Oct 14 15:46:22 multinode-671000 kubelet[1622]: E1014 15:46:22.908657    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:31.606349   15224 command_runner.go:130] > Oct 14 15:46:23 multinode-671000 kubelet[1622]: E1014 15:46:23.462111    1622 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I1014 08:47:31.606466   15224 command_runner.go:130] > Oct 14 15:46:23 multinode-671000 kubelet[1622]: E1014 15:46:23.462203    1622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fd736862-9e3e-4a3d-9a86-08efd2338477-config-volume podName:fd736862-9e3e-4a3d-9a86-08efd2338477 nodeName:}" failed. No retries permitted until 2024-10-14 15:46:31.462185592 +0000 UTC m=+21.853457598 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/fd736862-9e3e-4a3d-9a86-08efd2338477-config-volume") pod "coredns-7c65d6cfc9-fs9ct" (UID: "fd736862-9e3e-4a3d-9a86-08efd2338477") : object "kube-system"/"coredns" not registered
	I1014 08:47:31.606466   15224 command_runner.go:130] > Oct 14 15:46:23 multinode-671000 kubelet[1622]: E1014 15:46:23.562508    1622 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:31.606466   15224 command_runner.go:130] > Oct 14 15:46:23 multinode-671000 kubelet[1622]: E1014 15:46:23.562563    1622 projected.go:194] Error preparing data for projected volume kube-api-access-46k9l for pod default/busybox-7dff88458-vlp7j: object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:31.606466   15224 command_runner.go:130] > Oct 14 15:46:23 multinode-671000 kubelet[1622]: E1014 15:46:23.562768    1622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/99068807-9f92-42f1-a1a0-fb6e533dc61a-kube-api-access-46k9l podName:99068807-9f92-42f1-a1a0-fb6e533dc61a nodeName:}" failed. No retries permitted until 2024-10-14 15:46:31.562650785 +0000 UTC m=+21.953922691 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-46k9l" (UniqueName: "kubernetes.io/projected/99068807-9f92-42f1-a1a0-fb6e533dc61a-kube-api-access-46k9l") pod "busybox-7dff88458-vlp7j" (UID: "99068807-9f92-42f1-a1a0-fb6e533dc61a") : object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:31.606466   15224 command_runner.go:130] > Oct 14 15:46:23 multinode-671000 kubelet[1622]: E1014 15:46:23.910119    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:31.606466   15224 command_runner.go:130] > Oct 14 15:46:24 multinode-671000 kubelet[1622]: E1014 15:46:24.908917    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:31.606466   15224 command_runner.go:130] > Oct 14 15:46:25 multinode-671000 kubelet[1622]: E1014 15:46:25.909505    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:31.606466   15224 command_runner.go:130] > Oct 14 15:46:26 multinode-671000 kubelet[1622]: E1014 15:46:26.907750    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:31.607068   15224 command_runner.go:130] > Oct 14 15:46:27 multinode-671000 kubelet[1622]: E1014 15:46:27.908822    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:31.607068   15224 command_runner.go:130] > Oct 14 15:46:28 multinode-671000 kubelet[1622]: E1014 15:46:28.908219    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:31.607068   15224 command_runner.go:130] > Oct 14 15:46:29 multinode-671000 kubelet[1622]: E1014 15:46:29.910218    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:31.607068   15224 command_runner.go:130] > Oct 14 15:46:30 multinode-671000 kubelet[1622]: E1014 15:46:30.908259    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:31.607068   15224 command_runner.go:130] > Oct 14 15:46:31 multinode-671000 kubelet[1622]: E1014 15:46:31.541520    1622 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I1014 08:47:31.607068   15224 command_runner.go:130] > Oct 14 15:46:31 multinode-671000 kubelet[1622]: E1014 15:46:31.541653    1622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fd736862-9e3e-4a3d-9a86-08efd2338477-config-volume podName:fd736862-9e3e-4a3d-9a86-08efd2338477 nodeName:}" failed. No retries permitted until 2024-10-14 15:46:47.541634578 +0000 UTC m=+37.932906484 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/fd736862-9e3e-4a3d-9a86-08efd2338477-config-volume") pod "coredns-7c65d6cfc9-fs9ct" (UID: "fd736862-9e3e-4a3d-9a86-08efd2338477") : object "kube-system"/"coredns" not registered
	I1014 08:47:31.607068   15224 command_runner.go:130] > Oct 14 15:46:31 multinode-671000 kubelet[1622]: E1014 15:46:31.641930    1622 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:31.607068   15224 command_runner.go:130] > Oct 14 15:46:31 multinode-671000 kubelet[1622]: E1014 15:46:31.641961    1622 projected.go:194] Error preparing data for projected volume kube-api-access-46k9l for pod default/busybox-7dff88458-vlp7j: object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:31.607068   15224 command_runner.go:130] > Oct 14 15:46:31 multinode-671000 kubelet[1622]: E1014 15:46:31.642009    1622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/99068807-9f92-42f1-a1a0-fb6e533dc61a-kube-api-access-46k9l podName:99068807-9f92-42f1-a1a0-fb6e533dc61a nodeName:}" failed. No retries permitted until 2024-10-14 15:46:47.641990935 +0000 UTC m=+38.033262841 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-46k9l" (UniqueName: "kubernetes.io/projected/99068807-9f92-42f1-a1a0-fb6e533dc61a-kube-api-access-46k9l") pod "busybox-7dff88458-vlp7j" (UID: "99068807-9f92-42f1-a1a0-fb6e533dc61a") : object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:31.607068   15224 command_runner.go:130] > Oct 14 15:46:31 multinode-671000 kubelet[1622]: E1014 15:46:31.908383    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:31.607068   15224 command_runner.go:130] > Oct 14 15:46:32 multinode-671000 kubelet[1622]: E1014 15:46:32.908527    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:31.607068   15224 command_runner.go:130] > Oct 14 15:46:33 multinode-671000 kubelet[1622]: E1014 15:46:33.910838    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:31.607614   15224 command_runner.go:130] > Oct 14 15:46:34 multinode-671000 kubelet[1622]: E1014 15:46:34.908180    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:31.607657   15224 command_runner.go:130] > Oct 14 15:46:35 multinode-671000 kubelet[1622]: E1014 15:46:35.908574    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:31.607657   15224 command_runner.go:130] > Oct 14 15:46:36 multinode-671000 kubelet[1622]: E1014 15:46:36.907722    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:31.607796   15224 command_runner.go:130] > Oct 14 15:46:37 multinode-671000 kubelet[1622]: E1014 15:46:37.907861    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:31.607876   15224 command_runner.go:130] > Oct 14 15:46:38 multinode-671000 kubelet[1622]: E1014 15:46:38.908728    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:31.607876   15224 command_runner.go:130] > Oct 14 15:46:39 multinode-671000 kubelet[1622]: E1014 15:46:39.908994    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:31.607959   15224 command_runner.go:130] > Oct 14 15:46:40 multinode-671000 kubelet[1622]: E1014 15:46:40.908676    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:31.608010   15224 command_runner.go:130] > Oct 14 15:46:41 multinode-671000 kubelet[1622]: E1014 15:46:41.909525    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:31.608028   15224 command_runner.go:130] > Oct 14 15:46:42 multinode-671000 kubelet[1622]: E1014 15:46:42.908679    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:31.608089   15224 command_runner.go:130] > Oct 14 15:46:43 multinode-671000 kubelet[1622]: E1014 15:46:43.908615    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:31.608147   15224 command_runner.go:130] > Oct 14 15:46:44 multinode-671000 kubelet[1622]: E1014 15:46:44.908884    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:31.608182   15224 command_runner.go:130] > Oct 14 15:46:45 multinode-671000 kubelet[1622]: E1014 15:46:45.908370    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:31.608254   15224 command_runner.go:130] > Oct 14 15:46:46 multinode-671000 kubelet[1622]: E1014 15:46:46.909263    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:31.608281   15224 command_runner.go:130] > Oct 14 15:46:47 multinode-671000 kubelet[1622]: E1014 15:46:47.573240    1622 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I1014 08:47:31.608281   15224 command_runner.go:130] > Oct 14 15:46:47 multinode-671000 kubelet[1622]: E1014 15:46:47.573353    1622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fd736862-9e3e-4a3d-9a86-08efd2338477-config-volume podName:fd736862-9e3e-4a3d-9a86-08efd2338477 nodeName:}" failed. No retries permitted until 2024-10-14 15:47:19.573334644 +0000 UTC m=+69.964606650 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/fd736862-9e3e-4a3d-9a86-08efd2338477-config-volume") pod "coredns-7c65d6cfc9-fs9ct" (UID: "fd736862-9e3e-4a3d-9a86-08efd2338477") : object "kube-system"/"coredns" not registered
	I1014 08:47:31.608281   15224 command_runner.go:130] > Oct 14 15:46:47 multinode-671000 kubelet[1622]: E1014 15:46:47.673810    1622 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:31.608281   15224 command_runner.go:130] > Oct 14 15:46:47 multinode-671000 kubelet[1622]: E1014 15:46:47.673907    1622 projected.go:194] Error preparing data for projected volume kube-api-access-46k9l for pod default/busybox-7dff88458-vlp7j: object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:31.608281   15224 command_runner.go:130] > Oct 14 15:46:47 multinode-671000 kubelet[1622]: E1014 15:46:47.674014    1622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/99068807-9f92-42f1-a1a0-fb6e533dc61a-kube-api-access-46k9l podName:99068807-9f92-42f1-a1a0-fb6e533dc61a nodeName:}" failed. No retries permitted until 2024-10-14 15:47:19.673994259 +0000 UTC m=+70.065266165 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-46k9l" (UniqueName: "kubernetes.io/projected/99068807-9f92-42f1-a1a0-fb6e533dc61a-kube-api-access-46k9l") pod "busybox-7dff88458-vlp7j" (UID: "99068807-9f92-42f1-a1a0-fb6e533dc61a") : object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:31.608281   15224 command_runner.go:130] > Oct 14 15:46:47 multinode-671000 kubelet[1622]: E1014 15:46:47.908883    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:31.608281   15224 command_runner.go:130] > Oct 14 15:46:48 multinode-671000 kubelet[1622]: I1014 15:46:48.486803    1622 scope.go:117] "RemoveContainer" containerID="3d8b7bae48a59c755a1ffda14e7fdd0c2302b394db67b7de21fd5b819dad243b"
	I1014 08:47:31.608281   15224 command_runner.go:130] > Oct 14 15:46:48 multinode-671000 kubelet[1622]: I1014 15:46:48.487259    1622 scope.go:117] "RemoveContainer" containerID="c76c2585681071ddc096fbc042a644b58d0b7d5d604107c6906a076c7aa1cca6"
	I1014 08:47:31.608281   15224 command_runner.go:130] > Oct 14 15:46:48 multinode-671000 kubelet[1622]: E1014 15:46:48.487448    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(fde8ff75-bc7f-4db4-b098-c3a08b38d205)\"" pod="kube-system/storage-provisioner" podUID="fde8ff75-bc7f-4db4-b098-c3a08b38d205"
	I1014 08:47:31.608281   15224 command_runner.go:130] > Oct 14 15:46:48 multinode-671000 kubelet[1622]: E1014 15:46:48.908732    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:31.608281   15224 command_runner.go:130] > Oct 14 15:46:49 multinode-671000 kubelet[1622]: E1014 15:46:49.908877    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:31.608839   15224 command_runner.go:130] > Oct 14 15:46:50 multinode-671000 kubelet[1622]: E1014 15:46:50.907718    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:31.608919   15224 command_runner.go:130] > Oct 14 15:46:51 multinode-671000 kubelet[1622]: E1014 15:46:51.909552    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:31.608919   15224 command_runner.go:130] > Oct 14 15:46:52 multinode-671000 kubelet[1622]: E1014 15:46:52.908818    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:31.609126   15224 command_runner.go:130] > Oct 14 15:46:53 multinode-671000 kubelet[1622]: E1014 15:46:53.908389    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:31.609126   15224 command_runner.go:130] > Oct 14 15:46:54 multinode-671000 kubelet[1622]: E1014 15:46:54.908089    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:31.609250   15224 command_runner.go:130] > Oct 14 15:46:55 multinode-671000 kubelet[1622]: E1014 15:46:55.908582    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:31.609250   15224 command_runner.go:130] > Oct 14 15:46:56 multinode-671000 kubelet[1622]: E1014 15:46:56.908839    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:31.609250   15224 command_runner.go:130] > Oct 14 15:46:57 multinode-671000 kubelet[1622]: E1014 15:46:57.909489    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:31.609250   15224 command_runner.go:130] > Oct 14 15:46:58 multinode-671000 kubelet[1622]: E1014 15:46:58.908804    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:31.609250   15224 command_runner.go:130] > Oct 14 15:46:59 multinode-671000 kubelet[1622]: I1014 15:46:59.853068    1622 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
	I1014 08:47:31.609250   15224 command_runner.go:130] > Oct 14 15:47:02 multinode-671000 kubelet[1622]: I1014 15:47:02.908981    1622 scope.go:117] "RemoveContainer" containerID="c76c2585681071ddc096fbc042a644b58d0b7d5d604107c6906a076c7aa1cca6"
	I1014 08:47:31.609250   15224 command_runner.go:130] > Oct 14 15:47:09 multinode-671000 kubelet[1622]: I1014 15:47:09.901385    1622 scope.go:117] "RemoveContainer" containerID="0b5a6e440d7b67606ed0a4dfa4d07715b1fd7e6f53bc0b8779f86a33c5baf6e9"
	I1014 08:47:31.609250   15224 command_runner.go:130] > Oct 14 15:47:09 multinode-671000 kubelet[1622]: I1014 15:47:09.946936    1622 scope.go:117] "RemoveContainer" containerID="1ba3cd8bbd5963097f4d674fc98eca21e1a710f5a150a067747aa4e6c922d2fe"
	I1014 08:47:31.609250   15224 command_runner.go:130] > Oct 14 15:47:09 multinode-671000 kubelet[1622]: E1014 15:47:09.949713    1622 iptables.go:577] "Could not set up iptables canary" err=<
	I1014 08:47:31.609250   15224 command_runner.go:130] > Oct 14 15:47:09 multinode-671000 kubelet[1622]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I1014 08:47:31.609250   15224 command_runner.go:130] > Oct 14 15:47:09 multinode-671000 kubelet[1622]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I1014 08:47:31.609250   15224 command_runner.go:130] > Oct 14 15:47:09 multinode-671000 kubelet[1622]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I1014 08:47:31.609250   15224 command_runner.go:130] > Oct 14 15:47:09 multinode-671000 kubelet[1622]:  > table="nat" chain="KUBE-KUBELET-CANARY"
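The kubelet excerpt above shows the usual recovery sequence after a node restart: pod sandbox creation is refused while the CNI config is still uninitialized, and the failed configmap/projected volume mounts are retried with a doubling delay (4s, 8s, 16s, then 32s in the entries above) until the node reports Ready at 15:46:59. A minimal sketch of that doubling-with-cap retry policy, using a hypothetical mountVolume stand-in — not kubelet's actual nestedpendingoperations code:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// retryWithBackoff retries op, doubling the delay after each failure up to
	// a cap, mirroring the 4s -> 8s -> 16s -> 32s progression in the log above.
	// Illustrative only; the real policy lives inside the kubelet.
	func retryWithBackoff(op func() error, initial, max time.Duration) error {
		delay := initial
		for {
			if err := op(); err == nil {
				return nil
			} else {
				fmt.Printf("operation failed: %v; retrying in %s\n", err, delay)
			}
			time.Sleep(delay)
			if delay < max {
				delay *= 2
				if delay > max {
					delay = max
				}
			}
		}
	}

	func main() {
		attempts := 0
		// mountVolume stands in for the failing MountVolume.SetUp call; it
		// succeeds once the referenced configmap is finally registered.
		mountVolume := func() error {
			attempts++
			if attempts < 4 {
				return errors.New(`object "kube-system"/"coredns" not registered`)
			}
			return nil
		}
		_ = retryWithBackoff(mountVolume, 4*time.Second, 32*time.Second)
	}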
	I1014 08:47:31.651929   15224 logs.go:123] Gathering logs for coredns [5d223e2e64fc] ...
	I1014 08:47:31.651929   15224 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d223e2e64fc"
	I1014 08:47:31.683801   15224 command_runner.go:130] > .:53
	I1014 08:47:31.683896   15224 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 6020d6b23d8ab86e45d1d2aab12a43bd19ffd1800c3e6fd0a66779be525e59cd5618fbe141b55ee3a43e920652bc72e7efafa696e55e63b2f614e5fc80dabe10
	I1014 08:47:31.683896   15224 command_runner.go:130] > CoreDNS-1.11.3
	I1014 08:47:31.683896   15224 command_runner.go:130] > linux/amd64, go1.21.11, a6338e9
	I1014 08:47:31.683896   15224 command_runner.go:130] > [INFO] 127.0.0.1:42996 - 9104 "HINFO IN 5434967794797104596.5472118418078127170. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.148386647s
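The single HINFO query for a long random name in the coredns excerpt is the loop plugin's startup self-probe; the NXDOMAIN answer means no forwarding loop was detected, so this instance came up healthy. A quick way to confirm the same instance answers real queries from inside the VM is to point a resolver directly at the cluster DNS service — a sketch, assuming the conventional kube-dns ClusterIP 10.96.0.10 (adjust for the cluster's actual service CIDR):

	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Bypass /etc/resolv.conf and dial the kube-dns service directly.
		r := &net.Resolver{
			PreferGo: true,
			Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
				d := net.Dialer{Timeout: 2 * time.Second}
				return d.DialContext(ctx, "udp", "10.96.0.10:53")
			},
		}
		addrs, err := r.LookupHost(context.Background(), "kubernetes.default.svc.cluster.local")
		if err != nil {
			fmt.Println("lookup failed:", err)
			return
		}
		fmt.Println("cluster DNS answered:", addrs)
	}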
	I1014 08:47:31.684185   15224 logs.go:123] Gathering logs for kube-controller-manager [712aad669c9f] ...
	I1014 08:47:31.684185   15224 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 712aad669c9f"
	I1014 08:47:31.713967   15224 command_runner.go:130] ! I1014 15:22:34.276457       1 serving.go:386] Generated self-signed cert in-memory
	I1014 08:47:31.713967   15224 command_runner.go:130] ! I1014 15:22:34.721812       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I1014 08:47:31.713967   15224 command_runner.go:130] ! I1014 15:22:34.722099       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 08:47:31.713967   15224 command_runner.go:130] ! I1014 15:22:34.724748       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1014 08:47:31.713967   15224 command_runner.go:130] ! I1014 15:22:34.725085       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1014 08:47:31.713967   15224 command_runner.go:130] ! I1014 15:22:34.725754       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I1014 08:47:31.713967   15224 command_runner.go:130] ! I1014 15:22:34.725985       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1014 08:47:31.713967   15224 command_runner.go:130] ! I1014 15:22:39.207411       1 controllermanager.go:797] "Started controller" controller="serviceaccount-token-controller"
	I1014 08:47:31.713967   15224 command_runner.go:130] ! I1014 15:22:39.208026       1 controllermanager.go:749] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I1014 08:47:31.713967   15224 command_runner.go:130] ! I1014 15:22:39.207651       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I1014 08:47:31.713967   15224 command_runner.go:130] ! I1014 15:22:39.210064       1 shared_informer.go:320] Caches are synced for tokens
	I1014 08:47:31.713967   15224 command_runner.go:130] ! I1014 15:22:39.224528       1 controllermanager.go:797] "Started controller" controller="taint-eviction-controller"
	I1014 08:47:31.713967   15224 command_runner.go:130] ! I1014 15:22:39.224966       1 taint_eviction.go:281] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I1014 08:47:31.713967   15224 command_runner.go:130] ! I1014 15:22:39.225213       1 taint_eviction.go:287] "Sending events to api server" logger="taint-eviction-controller"
	I1014 08:47:31.713967   15224 command_runner.go:130] ! I1014 15:22:39.226734       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I1014 08:47:31.713967   15224 command_runner.go:130] ! I1014 15:22:39.238395       1 controllermanager.go:797] "Started controller" controller="pod-garbage-collector-controller"
	I1014 08:47:31.713967   15224 command_runner.go:130] ! I1014 15:22:39.238610       1 gc_controller.go:99] "Starting GC controller" logger="pod-garbage-collector-controller"
	I1014 08:47:31.713967   15224 command_runner.go:130] ! I1014 15:22:39.239186       1 shared_informer.go:313] Waiting for caches to sync for GC
	I1014 08:47:31.713967   15224 command_runner.go:130] ! I1014 15:22:39.257957       1 controllermanager.go:797] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I1014 08:47:31.713967   15224 command_runner.go:130] ! I1014 15:22:39.258113       1 horizontal.go:201] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I1014 08:47:31.713967   15224 command_runner.go:130] ! I1014 15:22:39.264110       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I1014 08:47:31.713967   15224 command_runner.go:130] ! I1014 15:22:39.291746       1 controllermanager.go:797] "Started controller" controller="token-cleaner-controller"
	I1014 08:47:31.713967   15224 command_runner.go:130] ! I1014 15:22:39.291968       1 tokencleaner.go:117] "Starting token cleaner controller" logger="token-cleaner-controller"
	I1014 08:47:31.713967   15224 command_runner.go:130] ! I1014 15:22:39.292012       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I1014 08:47:31.713967   15224 command_runner.go:130] ! I1014 15:22:39.292035       1 shared_informer.go:320] Caches are synced for token_cleaner
	I1014 08:47:31.713967   15224 command_runner.go:130] ! E1014 15:22:39.298368       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I1014 08:47:31.713967   15224 command_runner.go:130] ! I1014 15:22:39.298490       1 controllermanager.go:775] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I1014 08:47:31.713967   15224 command_runner.go:130] ! I1014 15:22:39.320068       1 controllermanager.go:797] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I1014 08:47:31.713967   15224 command_runner.go:130] ! I1014 15:22:39.321579       1 pvc_protection_controller.go:105] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I1014 08:47:31.713967   15224 command_runner.go:130] ! I1014 15:22:39.322507       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I1014 08:47:31.713967   15224 command_runner.go:130] ! I1014 15:22:39.334562       1 controllermanager.go:797] "Started controller" controller="endpointslice-mirroring-controller"
	I1014 08:47:31.713967   15224 command_runner.go:130] ! I1014 15:22:39.335065       1 endpointslicemirroring_controller.go:227] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I1014 08:47:31.713967   15224 command_runner.go:130] ! I1014 15:22:39.335174       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I1014 08:47:31.713967   15224 command_runner.go:130] ! I1014 15:22:39.357454       1 controllermanager.go:797] "Started controller" controller="serviceaccount-controller"
	I1014 08:47:31.713967   15224 command_runner.go:130] ! I1014 15:22:39.357636       1 serviceaccounts_controller.go:114] "Starting service account controller" logger="serviceaccount-controller"
	I1014 08:47:31.713967   15224 command_runner.go:130] ! I1014 15:22:39.357669       1 shared_informer.go:313] Waiting for caches to sync for service account
	I1014 08:47:31.713967   15224 command_runner.go:130] ! I1014 15:22:39.377687       1 controllermanager.go:797] "Started controller" controller="daemonset-controller"
	I1014 08:47:31.713967   15224 command_runner.go:130] ! I1014 15:22:39.378056       1 daemon_controller.go:294] "Starting daemon sets controller" logger="daemonset-controller"
	I1014 08:47:31.713967   15224 command_runner.go:130] ! I1014 15:22:39.378087       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I1014 08:47:31.713967   15224 command_runner.go:130] ! I1014 15:22:39.416186       1 controllermanager.go:797] "Started controller" controller="disruption-controller"
	I1014 08:47:31.714989   15224 command_runner.go:130] ! I1014 15:22:39.416643       1 disruption.go:452] "Sending events to api server." logger="disruption-controller"
	I1014 08:47:31.714989   15224 command_runner.go:130] ! I1014 15:22:39.417022       1 disruption.go:463] "Starting disruption controller" logger="disruption-controller"
	I1014 08:47:31.714989   15224 command_runner.go:130] ! I1014 15:22:39.417371       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I1014 08:47:31.714989   15224 command_runner.go:130] ! I1014 15:22:39.469032       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I1014 08:47:31.714989   15224 command_runner.go:130] ! I1014 15:22:39.469507       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I1014 08:47:31.714989   15224 command_runner.go:130] ! I1014 15:22:39.469770       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I1014 08:47:31.714989   15224 command_runner.go:130] ! I1014 15:22:39.470779       1 controllermanager.go:797] "Started controller" controller="certificatesigningrequest-signing-controller"
	I1014 08:47:31.714989   15224 command_runner.go:130] ! I1014 15:22:39.470793       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I1014 08:47:31.714989   15224 command_runner.go:130] ! I1014 15:22:39.471453       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I1014 08:47:31.714989   15224 command_runner.go:130] ! I1014 15:22:39.470805       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I1014 08:47:31.714989   15224 command_runner.go:130] ! I1014 15:22:39.470829       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I1014 08:47:31.714989   15224 command_runner.go:130] ! I1014 15:22:39.471957       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I1014 08:47:31.714989   15224 command_runner.go:130] ! I1014 15:22:39.470841       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I1014 08:47:31.714989   15224 command_runner.go:130] ! I1014 15:22:39.470861       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I1014 08:47:31.714989   15224 command_runner.go:130] ! I1014 15:22:39.472955       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I1014 08:47:31.714989   15224 command_runner.go:130] ! I1014 15:22:39.470870       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I1014 08:47:31.714989   15224 command_runner.go:130] ! I1014 15:22:39.621859       1 controllermanager.go:797] "Started controller" controller="persistentvolume-binder-controller"
	I1014 08:47:31.714989   15224 command_runner.go:130] ! I1014 15:22:39.622638       1 pv_controller_base.go:308] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I1014 08:47:31.714989   15224 command_runner.go:130] ! I1014 15:22:39.623052       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I1014 08:47:31.714989   15224 command_runner.go:130] ! I1014 15:22:39.777984       1 controllermanager.go:797] "Started controller" controller="root-ca-certificate-publisher-controller"
	I1014 08:47:31.714989   15224 command_runner.go:130] ! I1014 15:22:39.778063       1 publisher.go:107] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I1014 08:47:31.714989   15224 command_runner.go:130] ! I1014 15:22:39.778141       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I1014 08:47:31.714989   15224 command_runner.go:130] ! I1014 15:22:39.918879       1 controllermanager.go:797] "Started controller" controller="replicationcontroller-controller"
	I1014 08:47:31.714989   15224 command_runner.go:130] ! I1014 15:22:39.919046       1 replica_set.go:217] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I1014 08:47:31.714989   15224 command_runner.go:130] ! I1014 15:22:39.919060       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I1014 08:47:31.714989   15224 command_runner.go:130] ! I1014 15:22:40.166453       1 controllermanager.go:797] "Started controller" controller="garbage-collector-controller"
	I1014 08:47:31.714989   15224 command_runner.go:130] ! I1014 15:22:40.167822       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I1014 08:47:31.714989   15224 command_runner.go:130] ! I1014 15:22:40.168483       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I1014 08:47:31.714989   15224 command_runner.go:130] ! I1014 15:22:40.168745       1 graph_builder.go:351] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I1014 08:47:31.714989   15224 command_runner.go:130] ! I1014 15:22:40.423412       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I1014 08:47:31.714989   15224 command_runner.go:130] ! I1014 15:22:40.423795       1 controllermanager.go:797] "Started controller" controller="node-ipam-controller"
	I1014 08:47:31.714989   15224 command_runner.go:130] ! I1014 15:22:40.424239       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I1014 08:47:31.714989   15224 command_runner.go:130] ! I1014 15:22:40.424496       1 controllermanager.go:775] "Warning: skipping controller" controller="node-route-controller"
	I1014 08:47:31.714989   15224 command_runner.go:130] ! I1014 15:22:40.424173       1 node_ipam_controller.go:141] "Starting ipam controller" logger="node-ipam-controller"
	I1014 08:47:31.714989   15224 command_runner.go:130] ! I1014 15:22:40.425286       1 shared_informer.go:313] Waiting for caches to sync for node
	I1014 08:47:31.714989   15224 command_runner.go:130] ! I1014 15:22:40.570482       1 controllermanager.go:797] "Started controller" controller="persistentvolume-expander-controller"
	I1014 08:47:31.714989   15224 command_runner.go:130] ! I1014 15:22:40.570669       1 expand_controller.go:328] "Starting expand controller" logger="persistentvolume-expander-controller"
	I1014 08:47:31.714989   15224 command_runner.go:130] ! I1014 15:22:40.570684       1 shared_informer.go:313] Waiting for caches to sync for expand
	I1014 08:47:31.714989   15224 command_runner.go:130] ! I1014 15:22:40.718742       1 controllermanager.go:797] "Started controller" controller="ttl-after-finished-controller"
	I1014 08:47:31.714989   15224 command_runner.go:130] ! I1014 15:22:40.718766       1 controllermanager.go:749] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I1014 08:47:31.714989   15224 command_runner.go:130] ! I1014 15:22:40.718828       1 ttlafterfinished_controller.go:112] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I1014 08:47:31.714989   15224 command_runner.go:130] ! I1014 15:22:40.718839       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I1014 08:47:31.714989   15224 command_runner.go:130] ! I1014 15:22:40.875244       1 controllermanager.go:797] "Started controller" controller="endpointslice-controller"
	I1014 08:47:31.714989   15224 command_runner.go:130] ! I1014 15:22:40.875390       1 endpointslice_controller.go:281] "Starting endpoint slice controller" logger="endpointslice-controller"
	I1014 08:47:31.715966   15224 command_runner.go:130] ! I1014 15:22:40.875405       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I1014 08:47:31.715966   15224 command_runner.go:130] ! I1014 15:22:41.022254       1 controllermanager.go:797] "Started controller" controller="replicaset-controller"
	I1014 08:47:31.715966   15224 command_runner.go:130] ! I1014 15:22:41.023099       1 replica_set.go:217] "Starting controller" logger="replicaset-controller" name="replicaset"
	I1014 08:47:31.715966   15224 command_runner.go:130] ! I1014 15:22:41.023161       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I1014 08:47:31.715966   15224 command_runner.go:130] ! I1014 15:22:41.176342       1 controllermanager.go:797] "Started controller" controller="statefulset-controller"
	I1014 08:47:31.715966   15224 command_runner.go:130] ! I1014 15:22:41.176460       1 stateful_set.go:166] "Starting stateful set controller" logger="statefulset-controller"
	I1014 08:47:31.715966   15224 command_runner.go:130] ! I1014 15:22:41.176471       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I1014 08:47:31.715966   15224 command_runner.go:130] ! I1014 15:22:41.319171       1 controllermanager.go:797] "Started controller" controller="persistentvolume-protection-controller"
	I1014 08:47:31.715966   15224 command_runner.go:130] ! I1014 15:22:41.319300       1 pv_protection_controller.go:81] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I1014 08:47:31.715966   15224 command_runner.go:130] ! I1014 15:22:41.319332       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I1014 08:47:31.715966   15224 command_runner.go:130] ! I1014 15:22:41.469263       1 controllermanager.go:797] "Started controller" controller="clusterrole-aggregation-controller"
	I1014 08:47:31.715966   15224 command_runner.go:130] ! I1014 15:22:41.469488       1 clusterroleaggregation_controller.go:194] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I1014 08:47:31.715966   15224 command_runner.go:130] ! I1014 15:22:41.470311       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I1014 08:47:31.715966   15224 command_runner.go:130] ! I1014 15:22:41.618471       1 controllermanager.go:797] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I1014 08:47:31.715966   15224 command_runner.go:130] ! I1014 15:22:41.618507       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I1014 08:47:31.715966   15224 command_runner.go:130] ! I1014 15:22:41.619582       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I1014 08:47:31.715966   15224 command_runner.go:130] ! I1014 15:22:41.813364       1 controllermanager.go:797] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I1014 08:47:31.715966   15224 command_runner.go:130] ! I1014 15:22:41.813412       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I1014 08:47:31.715966   15224 command_runner.go:130] ! I1014 15:22:42.123997       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I1014 08:47:31.715966   15224 command_runner.go:130] ! I1014 15:22:42.124656       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I1014 08:47:31.715966   15224 command_runner.go:130] ! I1014 15:22:42.125147       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I1014 08:47:31.715966   15224 command_runner.go:130] ! I1014 15:22:42.125502       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I1014 08:47:31.715966   15224 command_runner.go:130] ! I1014 15:22:42.125684       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I1014 08:47:31.715966   15224 command_runner.go:130] ! I1014 15:22:42.125715       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I1014 08:47:31.715966   15224 command_runner.go:130] ! I1014 15:22:42.125765       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I1014 08:47:31.715966   15224 command_runner.go:130] ! I1014 15:22:42.125789       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I1014 08:47:31.715966   15224 command_runner.go:130] ! I1014 15:22:42.125821       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I1014 08:47:31.715966   15224 command_runner.go:130] ! I1014 15:22:42.125850       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I1014 08:47:31.715966   15224 command_runner.go:130] ! I1014 15:22:42.125919       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I1014 08:47:31.715966   15224 command_runner.go:130] ! I1014 15:22:42.125938       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I1014 08:47:31.715966   15224 command_runner.go:130] ! W1014 15:22:42.125970       1 shared_informer.go:597] resyncPeriod 22h30m25.60471532s is smaller than resyncCheckPeriod 23h6m49.635307948s and the informer has already started. Changing it to 23h6m49.635307948s
	I1014 08:47:31.715966   15224 command_runner.go:130] ! W1014 15:22:42.126028       1 shared_informer.go:597] resyncPeriod 22h40m57.132720005s is smaller than resyncCheckPeriod 23h6m49.635307948s and the informer has already started. Changing it to 23h6m49.635307948s
	I1014 08:47:31.715966   15224 command_runner.go:130] ! I1014 15:22:42.126215       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I1014 08:47:31.715966   15224 command_runner.go:130] ! I1014 15:22:42.126353       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I1014 08:47:31.715966   15224 command_runner.go:130] ! I1014 15:22:42.126435       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I1014 08:47:31.715966   15224 command_runner.go:130] ! I1014 15:22:42.126461       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I1014 08:47:31.715966   15224 command_runner.go:130] ! I1014 15:22:42.126498       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I1014 08:47:31.715966   15224 command_runner.go:130] ! I1014 15:22:42.126514       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I1014 08:47:31.715966   15224 command_runner.go:130] ! I1014 15:22:42.126546       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I1014 08:47:31.715966   15224 command_runner.go:130] ! I1014 15:22:42.126572       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I1014 08:47:31.715966   15224 command_runner.go:130] ! I1014 15:22:42.126591       1 controllermanager.go:797] "Started controller" controller="resourcequota-controller"
	I1014 08:47:31.715966   15224 command_runner.go:130] ! I1014 15:22:42.127139       1 resource_quota_controller.go:300] "Starting resource quota controller" logger="resourcequota-controller"
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:42.127191       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:42.127239       1 resource_quota_monitor.go:308] "QuotaMonitor running" logger="resourcequota-controller"
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:42.377410       1 controllermanager.go:797] "Started controller" controller="namespace-controller"
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:42.378109       1 namespace_controller.go:202] "Starting namespace controller" logger="namespace-controller"
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:42.378533       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:42.520088       1 controllermanager.go:797] "Started controller" controller="job-controller"
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:42.520194       1 job_controller.go:226] "Starting job controller" logger="job-controller"
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:42.520661       1 shared_informer.go:313] Waiting for caches to sync for job
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:42.669141       1 controllermanager.go:797] "Started controller" controller="ttl-controller"
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:42.669227       1 ttl_controller.go:127] "Starting TTL controller" logger="ttl-controller"
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:42.669239       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:42.713738       1 node_lifecycle_controller.go:430] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:42.713795       1 controllermanager.go:797] "Started controller" controller="node-lifecycle-controller"
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:42.713972       1 node_lifecycle_controller.go:464] "Sending events to api server" logger="node-lifecycle-controller"
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:42.714019       1 node_lifecycle_controller.go:475] "Starting node controller" logger="node-lifecycle-controller"
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:42.714028       1 shared_informer.go:313] Waiting for caches to sync for taint
	I1014 08:47:31.716999   15224 command_runner.go:130] ! E1014 15:22:42.870353       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:42.870400       1 controllermanager.go:775] "Warning: skipping controller" controller="service-lb-controller"
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:43.022018       1 controllermanager.go:797] "Started controller" controller="persistentvolume-attach-detach-controller"
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:43.022670       1 attach_detach_controller.go:338] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:43.022756       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:43.169053       1 controllermanager.go:797] "Started controller" controller="ephemeral-volume-controller"
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:43.169165       1 controller.go:173] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:43.169572       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:43.319453       1 controllermanager.go:797] "Started controller" controller="endpoints-controller"
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:43.319620       1 endpoints_controller.go:182] "Starting endpoint controller" logger="endpoints-controller"
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:43.319648       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:43.471065       1 controllermanager.go:797] "Started controller" controller="deployment-controller"
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:43.471807       1 deployment_controller.go:173] "Starting controller" logger="deployment-controller" controller="deployment"
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:43.472102       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:43.621382       1 controllermanager.go:797] "Started controller" controller="cronjob-controller"
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:43.621522       1 cronjob_controllerv2.go:145] "Starting cronjob controller v2" logger="cronjob-controller"
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:43.621537       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:43.663267       1 controllermanager.go:797] "Started controller" controller="certificatesigningrequest-approving-controller"
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:43.663415       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:43.663427       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:43.822946       1 controllermanager.go:797] "Started controller" controller="bootstrap-signer-controller"
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:43.822992       1 controllermanager.go:775] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:43.823061       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:43.863507       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:43.863638       1 controllermanager.go:797] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:43.863659       1 controllermanager.go:749] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:43.902554       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:43.913563       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:43.916687       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-671000\" does not exist"
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:43.921355       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:43.921578       1 shared_informer.go:320] Caches are synced for cronjob
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:43.921709       1 shared_informer.go:320] Caches are synced for job
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:43.921822       1 shared_informer.go:320] Caches are synced for PV protection
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:43.922806       1 shared_informer.go:320] Caches are synced for PVC protection
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:43.922814       1 shared_informer.go:320] Caches are synced for TTL after finished
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:43.924127       1 shared_informer.go:320] Caches are synced for persistent volume
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:43.924751       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:43.925596       1 shared_informer.go:320] Caches are synced for node
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:43.925653       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:43.925863       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:43.925961       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:43.925971       1 shared_informer.go:320] Caches are synced for cidrallocator
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:43.927918       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:43.933656       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:43.935993       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:43.939827       1 shared_informer.go:320] Caches are synced for GC
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:43.945652       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-671000" podCIDRs=["10.244.0.0/24"]
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:43.945733       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000"
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:43.946434       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000"
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:43.958217       1 shared_informer.go:320] Caches are synced for service account
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:43.964566       1 shared_informer.go:320] Caches are synced for HPA
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:43.970909       1 shared_informer.go:320] Caches are synced for TTL
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:43.971119       1 shared_informer.go:320] Caches are synced for ephemeral
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:43.971337       1 shared_informer.go:320] Caches are synced for expand
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:43.975501       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:43.976796       1 shared_informer.go:320] Caches are synced for stateful set
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:43.978344       1 shared_informer.go:320] Caches are synced for crt configmap
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:43.978435       1 shared_informer.go:320] Caches are synced for daemon sets
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:43.980084       1 shared_informer.go:320] Caches are synced for namespace
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:44.014728       1 shared_informer.go:320] Caches are synced for taint
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:44.015046       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:44.015932       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-671000"
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:44.016156       1 node_lifecycle_controller.go:1036] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:44.020094       1 shared_informer.go:320] Caches are synced for endpoint
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:44.020640       1 shared_informer.go:320] Caches are synced for ReplicationController
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:44.071958       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:44.103447       1 shared_informer.go:320] Caches are synced for resource quota
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:44.118642       1 shared_informer.go:320] Caches are synced for disruption
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:44.123565       1 shared_informer.go:320] Caches are synced for attach detach
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:44.124082       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:44.128052       1 shared_informer.go:320] Caches are synced for resource quota
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:44.164601       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:44.170410       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:44.172085       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:44.172168       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:44.172762       1 shared_informer.go:320] Caches are synced for deployment
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:44.173998       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:44.583260       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000"
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:44.634360       1 shared_informer.go:320] Caches are synced for garbage collector
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:44.669630       1 shared_informer.go:320] Caches are synced for garbage collector
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:44.669841       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:45.450540       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="308.738304ms"
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:45.524372       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="73.173482ms"
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:45.524478       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="65.397µs"
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:46.000395       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="74.724912ms"
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:46.017930       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="16.329807ms"
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:46.018255       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="275.988µs"
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:23:06.558708       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000"
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:23:06.579629       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000"
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:23:06.601705       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="101.399µs"
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:23:06.643522       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="58.099µs"
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:23:08.868021       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="148.904µs"
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:23:08.936155       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="33.695698ms"
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:23:08.939220       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="3.012072ms"
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:23:09.023157       1 node_lifecycle_controller.go:1055] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:23:10.921399       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000"
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:25:49.920125       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-671000-m02\" does not exist"
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:25:49.955308       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-671000-m02" podCIDRs=["10.244.1.0/24"]
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:25:49.956041       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:25:49.956493       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:25:50.332394       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:25:50.885049       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:25:54.059204       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-671000-m02"
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:25:54.342262       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:26:00.157293       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:26:18.720546       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:26:18.720611       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-671000-m02"
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:26:18.738467       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:26:19.084143       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:26:20.411603       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:26:44.435156       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="75.721873ms"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:26:44.496244       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="60.852418ms"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:26:44.496945       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="131.501µs"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:26:44.540742       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="117.6µs"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:26:47.680512       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="13.465591ms"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:26:47.680616       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="27.8µs"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:26:47.878633       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="13.308091ms"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:26:47.878779       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="30.7µs"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:26:50.724728       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:27:15.823577       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:30:35.115559       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-671000-m02"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:30:35.116078       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-671000-m03\" does not exist"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:30:35.128392       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-671000-m03" podCIDRs=["10.244.2.0/24"]
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:30:35.128677       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:30:35.128924       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:30:35.152829       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:30:35.373296       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:30:35.920577       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:30:39.132287       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-671000-m03"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:30:39.151825       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:30:45.490553       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:31:04.306000       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:31:04.306453       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-671000-m03"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:31:04.323636       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:31:05.841789       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:31:09.153752       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:31:56.911043       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:32:21.316935       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:36:11.719246       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:37:02.446841       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:37:26.676097       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:38:59.261991       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-671000-m02"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:38:59.262728       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:38:59.286871       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:39:04.424423       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:41:24.025444       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:41:24.063975       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:41:29.184402       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:41:29.185577       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-671000-m02"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:41:34.952323       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-671000-m03\" does not exist"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:41:34.952330       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-671000-m02"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:41:34.966125       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-671000-m03" podCIDRs=["10.244.3.0/24"]
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:41:34.966148       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:41:34.966505       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:41:34.987165       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:41:35.003234       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:41:35.540526       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:41:39.448073       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:41:45.343875       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:41:53.719761       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:41:53.720945       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-671000-m02"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:41:53.741507       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:41:54.369330       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:42:08.557249       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:42:32.770970       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:43:29.631595       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-671000-m02"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:43:29.632207       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:43:29.853526       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:43:35.163131       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:43:45.119758       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:43:45.151031       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:43:45.251625       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="18.269341ms"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:43:45.252472       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="121.1µs"
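	The controller-manager entries above show the node-ipam-controller handing each node a PodCIDR (multinode-671000 -> 10.244.0.0/24, -m02 -> 10.244.1.0/24, -m03 -> 10.244.2.0/24 and later 10.244.3.0/24 after a re-add). A minimal client-go sketch that lists those assignments from outside the cluster, assuming a kubeconfig at the default path (illustrative only, not part of the test suite):

	    // podcidrs.go - lists the PodCIDRs the node-ipam-controller assigned,
	    // matching the "Set node PodCIDR" entries in the log above.
	    package main

	    import (
	        "context"
	        "fmt"
	        "os"
	        "path/filepath"

	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    func main() {
	        // Load ~/.kube/config; an in-cluster config would work the same way.
	        home, err := os.UserHomeDir()
	        if err != nil {
	            panic(err)
	        }
	        cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(home, ".kube", "config"))
	        if err != nil {
	            panic(err)
	        }
	        clientset, err := kubernetes.NewForConfig(cfg)
	        if err != nil {
	            panic(err)
	        }
	        nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	        if err != nil {
	            panic(err)
	        }
	        for _, n := range nodes.Items {
	            // Spec.PodCIDRs mirrors the podCIDRs=[...] values logged by the
	            // node-ipam-controller (e.g. multinode-671000 -> 10.244.0.0/24).
	            fmt.Printf("%s\t%v\n", n.Name, n.Spec.PodCIDRs)
	        }
	    }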
	I1014 08:47:31.742011   15224 logs.go:123] Gathering logs for kindnet [bba035362eb9] ...
	I1014 08:47:31.742011   15224 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bba035362eb9"
	I1014 08:47:31.772870   15224 command_runner.go:130] ! I1014 15:46:18.000845       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1014 08:47:31.772870   15224 command_runner.go:130] ! I1014 15:46:18.015386       1 main.go:139] hostIP = 172.20.106.123
	I1014 08:47:31.772870   15224 command_runner.go:130] ! podIP = 172.20.106.123
	I1014 08:47:31.772870   15224 command_runner.go:130] ! I1014 15:46:18.015613       1 main.go:148] setting mtu 1500 for CNI 
	I1014 08:47:31.772870   15224 command_runner.go:130] ! I1014 15:46:18.015630       1 main.go:178] kindnetd IP family: "ipv4"
	I1014 08:47:31.772870   15224 command_runner.go:130] ! I1014 15:46:18.015641       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I1014 08:47:31.772870   15224 command_runner.go:130] ! I1014 15:46:18.919987       1 main.go:238] Error creating network policy controller: could not run nftables command: /dev/stdin:1:1-37: Error: Could not process rule: Operation not supported
	I1014 08:47:31.772870   15224 command_runner.go:130] ! add table inet kube-network-policies
	I1014 08:47:31.772870   15224 command_runner.go:130] ! ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
	I1014 08:47:31.772870   15224 command_runner.go:130] ! , skipping network policies
	I1014 08:47:31.772870   15224 command_runner.go:130] ! W1014 15:46:48.934772       1 reflector.go:561] pkg/mod/k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I1014 08:47:31.772870   15224 command_runner.go:130] ! E1014 15:46:48.935157       1 reflector.go:158] "Unhandled Error" err="pkg/mod/k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError"
	I1014 08:47:31.772870   15224 command_runner.go:130] ! I1014 15:46:58.925780       1 main.go:296] Handling node with IPs: map[172.20.106.123:{}]
	I1014 08:47:31.772870   15224 command_runner.go:130] ! I1014 15:46:58.926393       1 main.go:300] handling current node
	I1014 08:47:31.772870   15224 command_runner.go:130] ! I1014 15:46:58.927562       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:31.772870   15224 command_runner.go:130] ! I1014 15:46:58.927665       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:31.772870   15224 command_runner.go:130] ! I1014 15:46:58.928645       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.20.109.137 Flags: [] Table: 0 Realm: 0} 
	I1014 08:47:31.772870   15224 command_runner.go:130] ! I1014 15:46:58.929412       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:31.772870   15224 command_runner.go:130] ! I1014 15:46:58.929466       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:31.772870   15224 command_runner.go:130] ! I1014 15:46:58.929555       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.20.102.29 Flags: [] Table: 0 Realm: 0} 
	I1014 08:47:31.772870   15224 command_runner.go:130] ! I1014 15:47:08.930440       1 main.go:296] Handling node with IPs: map[172.20.106.123:{}]
	I1014 08:47:31.772870   15224 command_runner.go:130] ! I1014 15:47:08.930586       1 main.go:300] handling current node
	I1014 08:47:31.772870   15224 command_runner.go:130] ! I1014 15:47:08.930648       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:31.772870   15224 command_runner.go:130] ! I1014 15:47:08.930739       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:31.772870   15224 command_runner.go:130] ! I1014 15:47:08.931080       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:31.772870   15224 command_runner.go:130] ! I1014 15:47:08.931268       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:31.772870   15224 command_runner.go:130] ! I1014 15:47:18.921538       1 main.go:296] Handling node with IPs: map[172.20.106.123:{}]
	I1014 08:47:31.772870   15224 command_runner.go:130] ! I1014 15:47:18.921639       1 main.go:300] handling current node
	I1014 08:47:31.773398   15224 command_runner.go:130] ! I1014 15:47:18.921689       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:31.773398   15224 command_runner.go:130] ! I1014 15:47:18.921698       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:31.773398   15224 command_runner.go:130] ! I1014 15:47:18.922117       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:31.773462   15224 command_runner.go:130] ! I1014 15:47:18.922190       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:31.773462   15224 command_runner.go:130] ! I1014 15:47:28.925595       1 main.go:296] Handling node with IPs: map[172.20.106.123:{}]
	I1014 08:47:31.773462   15224 command_runner.go:130] ! I1014 15:47:28.925732       1 main.go:300] handling current node
	I1014 08:47:31.773462   15224 command_runner.go:130] ! I1014 15:47:28.925759       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:31.773462   15224 command_runner.go:130] ! I1014 15:47:28.925767       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:31.773542   15224 command_runner.go:130] ! I1014 15:47:28.926918       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:31.773569   15224 command_runner.go:130] ! I1014 15:47:28.927018       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
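	The kindnet log above is its periodic reconcile loop: every ~10s it walks the node list, handles the current node, and installs a route to each remote node's PodCIDR via that node's IP ("Adding route {... Dst: 10.244.1.0/24 ... Gw: 172.20.109.137 ...}"). A sketch of that kind of per-node route programming, assuming the github.com/vishvananda/netlink package and root on Linux (this is not kindnet's actual source):

	    // routes.go - illustrative per-node route programming in the style of
	    // the "Adding route" entries logged above.
	    package main

	    import (
	        "log"
	        "net"

	        "github.com/vishvananda/netlink"
	    )

	    // ensureRoute installs (or replaces) a route sending a remote node's
	    // PodCIDR via that node's IP, mirroring one "Adding route" log entry.
	    func ensureRoute(podCIDR, nodeIP string) error {
	        _, dst, err := net.ParseCIDR(podCIDR)
	        if err != nil {
	            return err
	        }
	        route := &netlink.Route{
	            Dst: dst,                 // e.g. 10.244.1.0/24
	            Gw:  net.ParseIP(nodeIP), // e.g. 172.20.109.137
	        }
	        // RouteReplace uses NLM_F_REPLACE, so re-running the reconcile
	        // loop every cycle does not fail on already-present routes.
	        return netlink.RouteReplace(route)
	    }

	    func main() {
	        // Hypothetical peer-node data matching the log above.
	        peers := map[string]string{
	            "10.244.1.0/24": "172.20.109.137", // multinode-671000-m02
	            "10.244.3.0/24": "172.20.102.29",  // multinode-671000-m03
	        }
	        for cidr, ip := range peers {
	            if err := ensureRoute(cidr, ip); err != nil {
	                log.Fatalf("route %s via %s: %v", cidr, ip, err)
	            }
	        }
	    }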
	I1014 08:47:31.774433   15224 logs.go:123] Gathering logs for Docker ...
	I1014 08:47:31.774433   15224 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 08:47:31.809451   15224 command_runner.go:130] > Oct 14 15:44:44 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1014 08:47:31.809451   15224 command_runner.go:130] > Oct 14 15:44:44 minikube cri-dockerd[221]: time="2024-10-14T15:44:44Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I1014 08:47:31.809451   15224 command_runner.go:130] > Oct 14 15:44:44 minikube cri-dockerd[221]: time="2024-10-14T15:44:44Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1014 08:47:31.809451   15224 command_runner.go:130] > Oct 14 15:44:44 minikube cri-dockerd[221]: time="2024-10-14T15:44:44Z" level=info msg="Start docker client with request timeout 0s"
	I1014 08:47:31.809451   15224 command_runner.go:130] > Oct 14 15:44:44 minikube cri-dockerd[221]: time="2024-10-14T15:44:44Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1014 08:47:31.809451   15224 command_runner.go:130] > Oct 14 15:44:45 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I1014 08:47:31.809451   15224 command_runner.go:130] > Oct 14 15:44:45 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I1014 08:47:31.809451   15224 command_runner.go:130] > Oct 14 15:44:45 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I1014 08:47:31.809451   15224 command_runner.go:130] > Oct 14 15:44:47 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I1014 08:47:31.809451   15224 command_runner.go:130] > Oct 14 15:44:47 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I1014 08:47:31.809451   15224 command_runner.go:130] > Oct 14 15:44:47 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1014 08:47:31.809451   15224 command_runner.go:130] > Oct 14 15:44:47 minikube cri-dockerd[413]: time="2024-10-14T15:44:47Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I1014 08:47:31.809451   15224 command_runner.go:130] > Oct 14 15:44:47 minikube cri-dockerd[413]: time="2024-10-14T15:44:47Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1014 08:47:31.809451   15224 command_runner.go:130] > Oct 14 15:44:47 minikube cri-dockerd[413]: time="2024-10-14T15:44:47Z" level=info msg="Start docker client with request timeout 0s"
	I1014 08:47:31.809451   15224 command_runner.go:130] > Oct 14 15:44:47 minikube cri-dockerd[413]: time="2024-10-14T15:44:47Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1014 08:47:31.809451   15224 command_runner.go:130] > Oct 14 15:44:47 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I1014 08:47:31.809451   15224 command_runner.go:130] > Oct 14 15:44:47 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I1014 08:47:31.809451   15224 command_runner.go:130] > Oct 14 15:44:47 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I1014 08:47:31.809451   15224 command_runner.go:130] > Oct 14 15:44:49 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I1014 08:47:31.809451   15224 command_runner.go:130] > Oct 14 15:44:49 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I1014 08:47:31.809451   15224 command_runner.go:130] > Oct 14 15:44:49 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1014 08:47:31.809451   15224 command_runner.go:130] > Oct 14 15:44:50 minikube cri-dockerd[421]: time="2024-10-14T15:44:50Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I1014 08:47:31.809451   15224 command_runner.go:130] > Oct 14 15:44:50 minikube cri-dockerd[421]: time="2024-10-14T15:44:50Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1014 08:47:31.809451   15224 command_runner.go:130] > Oct 14 15:44:50 minikube cri-dockerd[421]: time="2024-10-14T15:44:50Z" level=info msg="Start docker client with request timeout 0s"
	I1014 08:47:31.809451   15224 command_runner.go:130] > Oct 14 15:44:50 minikube cri-dockerd[421]: time="2024-10-14T15:44:50Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1014 08:47:31.809451   15224 command_runner.go:130] > Oct 14 15:44:50 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I1014 08:47:31.809451   15224 command_runner.go:130] > Oct 14 15:44:50 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I1014 08:47:31.809451   15224 command_runner.go:130] > Oct 14 15:44:50 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I1014 08:47:31.809451   15224 command_runner.go:130] > Oct 14 15:44:52 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I1014 08:47:31.809451   15224 command_runner.go:130] > Oct 14 15:44:52 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I1014 08:47:31.809451   15224 command_runner.go:130] > Oct 14 15:44:52 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I1014 08:47:31.809451   15224 command_runner.go:130] > Oct 14 15:44:52 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I1014 08:47:31.809451   15224 command_runner.go:130] > Oct 14 15:44:52 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
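	The cri-docker.service failures above look like boot ordering: cri-dockerd starts before dockerd's socket exists, exits fatally, and systemd retries until the start limit at 15:44:52; dockerd itself only starts later (15:45:33 below). A hypothetical systemd drop-in that would enforce that ordering, assuming the standard docker.service unit name (illustrative only, not taken from the minikube image):

	    # /etc/systemd/system/cri-docker.service.d/10-after-docker.conf
	    [Unit]
	    After=docker.service
	    Requires=docker.service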
	I1014 08:47:31.809451   15224 command_runner.go:130] > Oct 14 15:45:33 multinode-671000 systemd[1]: Starting Docker Application Container Engine...
	I1014 08:47:31.809451   15224 command_runner.go:130] > Oct 14 15:45:33 multinode-671000 dockerd[649]: time="2024-10-14T15:45:33.956984837Z" level=info msg="Starting up"
	I1014 08:47:31.809451   15224 command_runner.go:130] > Oct 14 15:45:33 multinode-671000 dockerd[649]: time="2024-10-14T15:45:33.957924243Z" level=info msg="containerd not running, starting managed containerd"
	I1014 08:47:31.809451   15224 command_runner.go:130] > Oct 14 15:45:33 multinode-671000 dockerd[649]: time="2024-10-14T15:45:33.959335951Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=655
	I1014 08:47:31.809451   15224 command_runner.go:130] > Oct 14 15:45:33 multinode-671000 dockerd[655]: time="2024-10-14T15:45:33.994773864Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	I1014 08:47:31.809451   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.020772213Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I1014 08:47:31.809451   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.021015015Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I1014 08:47:31.810430   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.021095615Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I1014 08:47:31.810430   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.021147816Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I1014 08:47:31.810430   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.021828519Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I1014 08:47:31.810430   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.021976120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I1014 08:47:31.810430   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.022248222Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I1014 08:47:31.810430   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.022376622Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I1014 08:47:31.810430   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.022401523Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I1014 08:47:31.810430   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.022414623Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I1014 08:47:31.810430   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.023030126Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I1014 08:47:31.810430   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.023715230Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I1014 08:47:31.810430   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.027058949Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I1014 08:47:31.810430   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.027212250Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I1014 08:47:31.810430   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.027346050Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I1014 08:47:31.810430   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.027434351Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I1014 08:47:31.810430   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.028070055Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I1014 08:47:31.810430   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.028254556Z" level=info msg="metadata content store policy set" policy=shared
	I1014 08:47:31.810430   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.033722086Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I1014 08:47:31.810430   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.033900187Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I1014 08:47:31.810430   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.033927888Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I1014 08:47:31.810430   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.033944088Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I1014 08:47:31.810430   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.033959488Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I1014 08:47:31.810430   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.034029088Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I1014 08:47:31.810430   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.034638992Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I1014 08:47:31.810430   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.034898493Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I1014 08:47:31.810430   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.034993394Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I1014 08:47:31.810430   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035025394Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I1014 08:47:31.810430   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035042394Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I1014 08:47:31.810430   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035056394Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I1014 08:47:31.810430   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035070894Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I1014 08:47:31.810430   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035091294Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I1014 08:47:31.810430   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035125794Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I1014 08:47:31.810430   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035139394Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I1014 08:47:31.810430   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035152195Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I1014 08:47:31.810430   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035200795Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I1014 08:47:31.810430   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035227495Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I1014 08:47:31.810430   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035242395Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I1014 08:47:31.810430   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035255095Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I1014 08:47:31.810430   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035268595Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I1014 08:47:31.810430   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035283595Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I1014 08:47:31.810430   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035296895Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I1014 08:47:31.810430   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035314495Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I1014 08:47:31.810430   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035330096Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I1014 08:47:31.810430   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035343596Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I1014 08:47:31.811434   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035364096Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I1014 08:47:31.811434   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035376796Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I1014 08:47:31.811434   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035388896Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I1014 08:47:31.811434   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035401196Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I1014 08:47:31.811434   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035419096Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I1014 08:47:31.811434   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035441896Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I1014 08:47:31.811434   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035454496Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I1014 08:47:31.811434   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035465896Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I1014 08:47:31.811434   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035512897Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I1014 08:47:31.811434   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035554297Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I1014 08:47:31.811434   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035568497Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I1014 08:47:31.811434   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035580597Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I1014 08:47:31.811434   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035590797Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I1014 08:47:31.811434   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035604297Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I1014 08:47:31.811434   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035619397Z" level=info msg="NRI interface is disabled by configuration."
	I1014 08:47:31.811434   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035934999Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I1014 08:47:31.811434   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.036229901Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I1014 08:47:31.811434   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.036295501Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I1014 08:47:31.811434   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.036322201Z" level=info msg="containerd successfully booted in 0.043787s"
	I1014 08:47:31.811434   15224 command_runner.go:130] > Oct 14 15:45:35 multinode-671000 dockerd[649]: time="2024-10-14T15:45:35.016752326Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1014 08:47:31.811434   15224 command_runner.go:130] > Oct 14 15:45:35 multinode-671000 dockerd[649]: time="2024-10-14T15:45:35.204043816Z" level=info msg="Loading containers: start."
	I1014 08:47:31.811434   15224 command_runner.go:130] > Oct 14 15:45:35 multinode-671000 dockerd[649]: time="2024-10-14T15:45:35.545951324Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1014 08:47:31.811434   15224 command_runner.go:130] > Oct 14 15:45:35 multinode-671000 dockerd[649]: time="2024-10-14T15:45:35.688138626Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I1014 08:47:31.811434   15224 command_runner.go:130] > Oct 14 15:45:35 multinode-671000 dockerd[649]: time="2024-10-14T15:45:35.780023455Z" level=info msg="Loading containers: done."
	I1014 08:47:31.811434   15224 command_runner.go:130] > Oct 14 15:45:35 multinode-671000 dockerd[649]: time="2024-10-14T15:45:35.809569125Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	I1014 08:47:31.811434   15224 command_runner.go:130] > Oct 14 15:45:35 multinode-671000 dockerd[649]: time="2024-10-14T15:45:35.809610125Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	I1014 08:47:31.811434   15224 command_runner.go:130] > Oct 14 15:45:35 multinode-671000 dockerd[649]: time="2024-10-14T15:45:35.809633825Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
	I1014 08:47:31.811434   15224 command_runner.go:130] > Oct 14 15:45:35 multinode-671000 dockerd[649]: time="2024-10-14T15:45:35.810490930Z" level=info msg="Daemon has completed initialization"
	I1014 08:47:31.811434   15224 command_runner.go:130] > Oct 14 15:45:35 multinode-671000 dockerd[649]: time="2024-10-14T15:45:35.853736479Z" level=info msg="API listen on /var/run/docker.sock"
	I1014 08:47:31.811434   15224 command_runner.go:130] > Oct 14 15:45:35 multinode-671000 dockerd[649]: time="2024-10-14T15:45:35.854139881Z" level=info msg="API listen on [::]:2376"
	I1014 08:47:31.811434   15224 command_runner.go:130] > Oct 14 15:45:35 multinode-671000 systemd[1]: Started Docker Application Container Engine.
	I1014 08:47:31.811434   15224 command_runner.go:130] > Oct 14 15:46:01 multinode-671000 systemd[1]: Stopping Docker Application Container Engine...
	I1014 08:47:31.811434   15224 command_runner.go:130] > Oct 14 15:46:01 multinode-671000 dockerd[649]: time="2024-10-14T15:46:01.049459779Z" level=info msg="Processing signal 'terminated'"
	I1014 08:47:31.811434   15224 command_runner.go:130] > Oct 14 15:46:01 multinode-671000 dockerd[649]: time="2024-10-14T15:46:01.053392981Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I1014 08:47:31.811434   15224 command_runner.go:130] > Oct 14 15:46:01 multinode-671000 dockerd[649]: time="2024-10-14T15:46:01.053568081Z" level=info msg="Daemon shutdown complete"
	I1014 08:47:31.811434   15224 command_runner.go:130] > Oct 14 15:46:01 multinode-671000 dockerd[649]: time="2024-10-14T15:46:01.053889681Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I1014 08:47:31.812439   15224 command_runner.go:130] > Oct 14 15:46:01 multinode-671000 dockerd[649]: time="2024-10-14T15:46:01.054172781Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I1014 08:47:31.812439   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 systemd[1]: docker.service: Deactivated successfully.
	I1014 08:47:31.812439   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 systemd[1]: Stopped Docker Application Container Engine.
	I1014 08:47:31.812439   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 systemd[1]: Starting Docker Application Container Engine...
	I1014 08:47:31.812439   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1087]: time="2024-10-14T15:46:02.109177376Z" level=info msg="Starting up"
	I1014 08:47:31.812439   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1087]: time="2024-10-14T15:46:02.110667577Z" level=info msg="containerd not running, starting managed containerd"
	I1014 08:47:31.812439   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1087]: time="2024-10-14T15:46:02.112008177Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1093
	I1014 08:47:31.812439   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.143199292Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	I1014 08:47:31.812439   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.168149004Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I1014 08:47:31.812439   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.168197704Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I1014 08:47:31.812439   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.168231304Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I1014 08:47:31.812439   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.168244704Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I1014 08:47:31.812439   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.168266504Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I1014 08:47:31.812439   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.168317904Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I1014 08:47:31.812439   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.168445004Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I1014 08:47:31.812439   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.168531404Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I1014 08:47:31.812439   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.168550204Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I1014 08:47:31.812439   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.168561104Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I1014 08:47:31.812439   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.168583904Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I1014 08:47:31.812439   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.168690904Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I1014 08:47:31.812439   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.171907506Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I1014 08:47:31.812439   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.172002906Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I1014 08:47:31.812439   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.172175606Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I1014 08:47:31.812439   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.172377606Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I1014 08:47:31.812439   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.172424606Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I1014 08:47:31.812439   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.172461506Z" level=info msg="metadata content store policy set" policy=shared
	I1014 08:47:31.812439   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.172795106Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I1014 08:47:31.812439   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.172882406Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I1014 08:47:31.812439   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.172902406Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I1014 08:47:31.812439   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.172916306Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I1014 08:47:31.812439   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.172930506Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I1014 08:47:31.812439   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.172992206Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I1014 08:47:31.812439   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173380806Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I1014 08:47:31.812439   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173626906Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I1014 08:47:31.812439   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173734806Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I1014 08:47:31.812439   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173758306Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I1014 08:47:31.812439   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173794906Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I1014 08:47:31.812439   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173849506Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I1014 08:47:31.812439   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173864606Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I1014 08:47:31.812439   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173878206Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I1014 08:47:31.813437   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173900507Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I1014 08:47:31.813437   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173916207Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I1014 08:47:31.813437   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173928607Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I1014 08:47:31.813437   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173940507Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I1014 08:47:31.813437   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173959407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I1014 08:47:31.813437   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173973007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I1014 08:47:31.813437   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173985207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I1014 08:47:31.813437   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173998307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I1014 08:47:31.813437   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174010307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I1014 08:47:31.813437   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174023407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I1014 08:47:31.813437   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174035407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I1014 08:47:31.813437   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174047207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I1014 08:47:31.813437   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174077107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I1014 08:47:31.813437   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174095807Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I1014 08:47:31.813437   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174107607Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I1014 08:47:31.813437   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174191507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I1014 08:47:31.813437   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174206607Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I1014 08:47:31.813437   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174229207Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I1014 08:47:31.813437   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174259307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I1014 08:47:31.813437   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174352207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I1014 08:47:31.813437   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174370407Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I1014 08:47:31.813437   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174499607Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I1014 08:47:31.813437   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174541907Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I1014 08:47:31.813437   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174556007Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I1014 08:47:31.813437   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174568207Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I1014 08:47:31.813437   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174578207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I1014 08:47:31.813437   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174598407Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I1014 08:47:31.813437   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174612107Z" level=info msg="NRI interface is disabled by configuration."
	I1014 08:47:31.813437   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174893107Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I1014 08:47:31.813437   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.175192307Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I1014 08:47:31.813437   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.175271607Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I1014 08:47:31.813437   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.175364007Z" level=info msg="containerd successfully booted in 0.032943s"
	I1014 08:47:31.813437   15224 command_runner.go:130] > Oct 14 15:46:03 multinode-671000 dockerd[1087]: time="2024-10-14T15:46:03.157176768Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1014 08:47:31.813437   15224 command_runner.go:130] > Oct 14 15:46:03 multinode-671000 dockerd[1087]: time="2024-10-14T15:46:03.188626383Z" level=info msg="Loading containers: start."
	I1014 08:47:31.813437   15224 command_runner.go:130] > Oct 14 15:46:03 multinode-671000 dockerd[1087]: time="2024-10-14T15:46:03.419822091Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1014 08:47:31.813437   15224 command_runner.go:130] > Oct 14 15:46:03 multinode-671000 dockerd[1087]: time="2024-10-14T15:46:03.533275144Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I1014 08:47:31.813437   15224 command_runner.go:130] > Oct 14 15:46:03 multinode-671000 dockerd[1087]: time="2024-10-14T15:46:03.631380390Z" level=info msg="Loading containers: done."
	I1014 08:47:31.813437   15224 command_runner.go:130] > Oct 14 15:46:03 multinode-671000 dockerd[1087]: time="2024-10-14T15:46:03.656005002Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
	I1014 08:47:31.813437   15224 command_runner.go:130] > Oct 14 15:46:03 multinode-671000 dockerd[1087]: time="2024-10-14T15:46:03.656245502Z" level=info msg="Daemon has completed initialization"
	I1014 08:47:31.813437   15224 command_runner.go:130] > Oct 14 15:46:03 multinode-671000 dockerd[1087]: time="2024-10-14T15:46:03.695426820Z" level=info msg="API listen on /var/run/docker.sock"
	I1014 08:47:31.813437   15224 command_runner.go:130] > Oct 14 15:46:03 multinode-671000 systemd[1]: Started Docker Application Container Engine.
	I1014 08:47:31.813437   15224 command_runner.go:130] > Oct 14 15:46:03 multinode-671000 dockerd[1087]: time="2024-10-14T15:46:03.695638120Z" level=info msg="API listen on [::]:2376"
	I1014 08:47:31.813437   15224 command_runner.go:130] > Oct 14 15:46:04 multinode-671000 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1014 08:47:31.814446   15224 command_runner.go:130] > Oct 14 15:46:04 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:04Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I1014 08:47:31.814446   15224 command_runner.go:130] > Oct 14 15:46:04 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:04Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1014 08:47:31.814446   15224 command_runner.go:130] > Oct 14 15:46:04 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:04Z" level=info msg="Start docker client with request timeout 0s"
	I1014 08:47:31.814446   15224 command_runner.go:130] > Oct 14 15:46:04 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:04Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I1014 08:47:31.814446   15224 command_runner.go:130] > Oct 14 15:46:04 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:04Z" level=info msg="Loaded network plugin cni"
	I1014 08:47:31.814446   15224 command_runner.go:130] > Oct 14 15:46:04 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:04Z" level=info msg="Docker cri networking managed by network plugin cni"
	I1014 08:47:31.814446   15224 command_runner.go:130] > Oct 14 15:46:04 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:04Z" level=info msg="Setting cgroupDriver cgroupfs"
	I1014 08:47:31.814446   15224 command_runner.go:130] > Oct 14 15:46:04 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:04Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I1014 08:47:31.814446   15224 command_runner.go:130] > Oct 14 15:46:04 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:04Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I1014 08:47:31.814446   15224 command_runner.go:130] > Oct 14 15:46:04 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:04Z" level=info msg="Start cri-dockerd grpc backend"
	I1014 08:47:31.814446   15224 command_runner.go:130] > Oct 14 15:46:04 multinode-671000 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I1014 08:47:31.814446   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:09Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7c65d6cfc9-fs9ct_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"2f8cc9a218fef4446839b0253827fc2dd89f8eccbd66ab7f0e5123c033f0aacc\""
	I1014 08:47:31.814446   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:09Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-7dff88458-vlp7j_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"06e529266db4b2cf3ad09285cddeb89d7d11e858b5156cf73f41198c9500b9f2\""
	I1014 08:47:31.814446   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.635579177Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:31.814446   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.635817077Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:31.814446   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.635919877Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:31.814446   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.636083677Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:31.814446   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.762883836Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:31.814446   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.763092036Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:31.814446   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.763114536Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:31.814446   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.765440937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:31.814446   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:10Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1d3033f871fb11cb3095bcf5c5d43615de9685372a45edf226fe52b2f482bc71/resolv.conf as [nameserver 172.20.96.1]"
	I1014 08:47:31.814446   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.846488476Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:31.814446   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.847106376Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:31.814446   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.847254676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:31.814446   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.854373579Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:31.814446   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.883112593Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:31.814446   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.884477393Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:31.814446   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.884514293Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:31.814446   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.884605993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:31.814446   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:10Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6155e8be2d5d725a4259a45fe10f7ceb3fc581d528a6486633b563a59f331127/resolv.conf as [nameserver 172.20.96.1]"
	I1014 08:47:31.814446   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:11Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7bd4c36606eefc91e9ae07ea5683536fc78fdb6f7f752f44d28787b88540a878/resolv.conf as [nameserver 172.20.96.1]"
	I1014 08:47:31.814446   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.061102976Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:31.814446   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.061201476Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:31.814446   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.061221876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:31.814446   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.061393176Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:31.814446   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:11Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0697a11790e806fa4d679e3d97be4c7193692d1cb8f76d882cf3a75aa8e0c238/resolv.conf as [nameserver 172.20.96.1]"
	I1014 08:47:31.814446   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.312465294Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:31.814446   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.312610794Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:31.815485   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.312646494Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:31.815485   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.312762794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:31.815485   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.422697746Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:31.815485   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.422797746Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:31.815485   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.422816346Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:31.815485   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.423001046Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:31.815485   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.500801282Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:31.815485   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.501016583Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:31.815485   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.501037383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:31.815485   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.504117984Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:31.815485   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:15Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I1014 08:47:31.815485   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.472267615Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:31.815485   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.472571215Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:31.815485   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.472597215Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:31.815485   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.472873315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:31.815485   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.475833517Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:31.815485   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.476013017Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:31.815485   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.476107917Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:31.815485   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.476358717Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:31.815485   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.515050835Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:31.815485   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.515249635Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:31.815485   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.515393835Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:31.815485   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.515565835Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:31.815485   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:16Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6f8bdf552734e59df94d489a081585cdaeeaee729f0a0ae92105117d4d744f3f/resolv.conf as [nameserver 172.20.96.1]"
	I1014 08:47:31.815485   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:16Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/cdcdd532ba1369fe0e866b321f18e0bd99a88a2699c325eea17a070d48f2f024/resolv.conf as [nameserver 172.20.96.1]"
	I1014 08:47:31.815485   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.911588321Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:31.815485   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.913177522Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:31.815485   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.913368522Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:31.815485   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.914060722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:31.815485   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:16Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7bcadf1f0885ff3411b477a64c6af366c77eb264ce7a761f174b079b11e2e703/resolv.conf as [nameserver 172.20.96.1]"
	I1014 08:47:31.815485   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:17.063841193Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:31.815485   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:17.063929693Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:31.815485   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:17.063946093Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:31.815485   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:17.064242693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:31.815485   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:17.206735160Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:31.815485   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:17.207544260Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:31.815485   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:17.208633061Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:31.816501   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:17.224429668Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:31.816501   15224 command_runner.go:130] > Oct 14 15:46:47 multinode-671000 dockerd[1087]: time="2024-10-14T15:46:47.556424473Z" level=info msg="ignoring event" container=c76c2585681071ddc096fbc042a644b58d0b7d5d604107c6906a076c7aa1cca6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1014 08:47:31.816501   15224 command_runner.go:130] > Oct 14 15:46:47 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:47.559859508Z" level=info msg="shim disconnected" id=c76c2585681071ddc096fbc042a644b58d0b7d5d604107c6906a076c7aa1cca6 namespace=moby
	I1014 08:47:31.816501   15224 command_runner.go:130] > Oct 14 15:46:47 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:47.560270512Z" level=warning msg="cleaning up after shim disconnected" id=c76c2585681071ddc096fbc042a644b58d0b7d5d604107c6906a076c7aa1cca6 namespace=moby
	I1014 08:47:31.816501   15224 command_runner.go:130] > Oct 14 15:46:47 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:47.560505714Z" level=info msg="cleaning up dead shim" namespace=moby
	I1014 08:47:31.816501   15224 command_runner.go:130] > Oct 14 15:47:03 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:03.070959923Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:31.816501   15224 command_runner.go:130] > Oct 14 15:47:03 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:03.071176624Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:31.816501   15224 command_runner.go:130] > Oct 14 15:47:03 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:03.071240924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:31.816501   15224 command_runner.go:130] > Oct 14 15:47:03 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:03.071756926Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:31.816501   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.071716036Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:31.816501   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.071943436Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:31.816501   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.071968036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:31.816501   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.072116937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:31.816501   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:47:20Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/429b989a1a986d23a2e5aee0de1aef1e683a014bebb587981622bd80a3ac5221/resolv.conf as [nameserver 172.20.96.1]"
	I1014 08:47:31.816501   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.295865797Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:31.816501   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.295993998Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:31.816501   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.296019698Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:31.816501   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.296117898Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:31.816501   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:47:20Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9092d17516eb35243fd461a360605e738727838ee50f870f3bd6c290fd061d20/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I1014 08:47:31.816501   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.536751498Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:31.816501   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.537062099Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:31.816501   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.537100499Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:31.816501   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.537246499Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:31.816501   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.821494873Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:31.816501   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.821592273Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:31.816501   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.821611273Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:31.816501   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.821730874Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
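A note on the journal excerpt above: the one recurring warning with an obvious follow-up is dockerd's "ip6tables is enabled, but cannot set up ip6tables chains", logged on both daemon starts — the guest kernel (5.10.207, per the aufs probe above) exposes no ip6 NAT table. A minimal way to confirm this by hand, assuming the multinode-671000 profile from this report is still running; the daemon.json change is a generic Docker 27.x option and is not something this report performs:

	# Confirm the missing ip6 nat table from inside the VM:
	minikube ssh -p multinode-671000 "sudo ip6tables -t nat -L"
	# If IPv6 NAT is not needed, ip6tables can be disabled in
	# /etc/docker/daemon.json ({"ip6tables": false}) and dockerd restarted:
	minikube ssh -p multinode-671000 "sudo systemctl restart docker"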
	I1014 08:47:31.844459   15224 logs.go:123] Gathering logs for dmesg ...
	I1014 08:47:31.844459   15224 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 08:47:31.867551   15224 command_runner.go:130] > [Oct14 15:44] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I1014 08:47:31.867648   15224 command_runner.go:130] > [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I1014 08:47:31.867648   15224 command_runner.go:130] > [  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I1014 08:47:31.867648   15224 command_runner.go:130] > [  +0.121183] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I1014 08:47:31.867648   15224 command_runner.go:130] > [  +0.024192] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I1014 08:47:31.867648   15224 command_runner.go:130] > [  +0.000000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I1014 08:47:31.867749   15224 command_runner.go:130] > [  +0.000000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I1014 08:47:31.867749   15224 command_runner.go:130] > [  +0.058588] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I1014 08:47:31.867816   15224 command_runner.go:130] > [  +0.021951] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I1014 08:47:31.867858   15224 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I1014 08:47:31.867858   15224 command_runner.go:130] > [  +5.764502] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I1014 08:47:31.867921   15224 command_runner.go:130] > [  +0.701221] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I1014 08:47:31.867921   15224 command_runner.go:130] > [  +1.823727] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	I1014 08:47:31.867955   15224 command_runner.go:130] > [  +7.351082] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I1014 08:47:31.867955   15224 command_runner.go:130] > [  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I1014 08:47:31.867955   15224 command_runner.go:130] > [  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	I1014 08:47:31.867955   15224 command_runner.go:130] > [Oct14 15:45] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	I1014 08:47:31.867955   15224 command_runner.go:130] > [  +0.175163] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	I1014 08:47:31.868054   15224 command_runner.go:130] > [ +26.061812] systemd-fstab-generator[1013]: Ignoring "noauto" option for root device
	I1014 08:47:31.868079   15224 command_runner.go:130] > [  +0.098944] kauditd_printk_skb: 71 callbacks suppressed
	I1014 08:47:31.868079   15224 command_runner.go:130] > [  +0.531295] systemd-fstab-generator[1053]: Ignoring "noauto" option for root device
	I1014 08:47:31.868079   15224 command_runner.go:130] > [Oct14 15:46] systemd-fstab-generator[1065]: Ignoring "noauto" option for root device
	I1014 08:47:31.868079   15224 command_runner.go:130] > [  +0.229472] systemd-fstab-generator[1079]: Ignoring "noauto" option for root device
	I1014 08:47:31.868079   15224 command_runner.go:130] > [  +2.943333] systemd-fstab-generator[1310]: Ignoring "noauto" option for root device
	I1014 08:47:31.868079   15224 command_runner.go:130] > [  +0.192845] systemd-fstab-generator[1321]: Ignoring "noauto" option for root device
	I1014 08:47:31.868079   15224 command_runner.go:130] > [  +0.209914] systemd-fstab-generator[1333]: Ignoring "noauto" option for root device
	I1014 08:47:31.868079   15224 command_runner.go:130] > [  +0.290916] systemd-fstab-generator[1348]: Ignoring "noauto" option for root device
	I1014 08:47:31.868079   15224 command_runner.go:130] > [  +0.928050] systemd-fstab-generator[1473]: Ignoring "noauto" option for root device
	I1014 08:47:31.868079   15224 command_runner.go:130] > [  +0.103044] kauditd_printk_skb: 202 callbacks suppressed
	I1014 08:47:31.868079   15224 command_runner.go:130] > [  +3.884891] systemd-fstab-generator[1614]: Ignoring "noauto" option for root device
	I1014 08:47:31.868079   15224 command_runner.go:130] > [  +1.232270] kauditd_printk_skb: 44 callbacks suppressed
	I1014 08:47:31.868079   15224 command_runner.go:130] > [  +5.880292] kauditd_printk_skb: 30 callbacks suppressed
	I1014 08:47:31.868079   15224 command_runner.go:130] > [  +4.216972] systemd-fstab-generator[2460]: Ignoring "noauto" option for root device
	I1014 08:47:31.868079   15224 command_runner.go:130] > [ +15.813728] kauditd_printk_skb: 72 callbacks suppressed
	I1014 08:47:31.869550   15224 logs.go:123] Gathering logs for describe nodes ...
	I1014 08:47:31.870127   15224 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 08:47:32.080960   15224 command_runner.go:130] > Name:               multinode-671000
	I1014 08:47:32.080960   15224 command_runner.go:130] > Roles:              control-plane
	I1014 08:47:32.080960   15224 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I1014 08:47:32.080960   15224 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I1014 08:47:32.080960   15224 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I1014 08:47:32.080960   15224 command_runner.go:130] >                     kubernetes.io/hostname=multinode-671000
	I1014 08:47:32.080960   15224 command_runner.go:130] >                     kubernetes.io/os=linux
	I1014 08:47:32.080960   15224 command_runner.go:130] >                     minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b
	I1014 08:47:32.080960   15224 command_runner.go:130] >                     minikube.k8s.io/name=multinode-671000
	I1014 08:47:32.080960   15224 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I1014 08:47:32.080960   15224 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_10_14T08_22_41_0700
	I1014 08:47:32.080960   15224 command_runner.go:130] >                     minikube.k8s.io/version=v1.34.0
	I1014 08:47:32.080960   15224 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I1014 08:47:32.080960   15224 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I1014 08:47:32.080960   15224 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I1014 08:47:32.080960   15224 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I1014 08:47:32.080960   15224 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I1014 08:47:32.080960   15224 command_runner.go:130] > CreationTimestamp:  Mon, 14 Oct 2024 15:22:36 +0000
	I1014 08:47:32.080960   15224 command_runner.go:130] > Taints:             <none>
	I1014 08:47:32.080960   15224 command_runner.go:130] > Unschedulable:      false
	I1014 08:47:32.080960   15224 command_runner.go:130] > Lease:
	I1014 08:47:32.080960   15224 command_runner.go:130] >   HolderIdentity:  multinode-671000
	I1014 08:47:32.080960   15224 command_runner.go:130] >   AcquireTime:     <unset>
	I1014 08:47:32.080960   15224 command_runner.go:130] >   RenewTime:       Mon, 14 Oct 2024 15:47:26 +0000
	I1014 08:47:32.080960   15224 command_runner.go:130] > Conditions:
	I1014 08:47:32.080960   15224 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I1014 08:47:32.080960   15224 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I1014 08:47:32.080960   15224 command_runner.go:130] >   MemoryPressure   False   Mon, 14 Oct 2024 15:46:59 +0000   Mon, 14 Oct 2024 15:22:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I1014 08:47:32.080960   15224 command_runner.go:130] >   DiskPressure     False   Mon, 14 Oct 2024 15:46:59 +0000   Mon, 14 Oct 2024 15:22:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I1014 08:47:32.080960   15224 command_runner.go:130] >   PIDPressure      False   Mon, 14 Oct 2024 15:46:59 +0000   Mon, 14 Oct 2024 15:22:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I1014 08:47:32.080960   15224 command_runner.go:130] >   Ready            True    Mon, 14 Oct 2024 15:46:59 +0000   Mon, 14 Oct 2024 15:46:59 +0000   KubeletReady                 kubelet is posting ready status
	I1014 08:47:32.080960   15224 command_runner.go:130] > Addresses:
	I1014 08:47:32.080960   15224 command_runner.go:130] >   InternalIP:  172.20.106.123
	I1014 08:47:32.080960   15224 command_runner.go:130] >   Hostname:    multinode-671000
	I1014 08:47:32.080960   15224 command_runner.go:130] > Capacity:
	I1014 08:47:32.080960   15224 command_runner.go:130] >   cpu:                2
	I1014 08:47:32.080960   15224 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I1014 08:47:32.080960   15224 command_runner.go:130] >   hugepages-2Mi:      0
	I1014 08:47:32.080960   15224 command_runner.go:130] >   memory:             2164264Ki
	I1014 08:47:32.080960   15224 command_runner.go:130] >   pods:               110
	I1014 08:47:32.080960   15224 command_runner.go:130] > Allocatable:
	I1014 08:47:32.080960   15224 command_runner.go:130] >   cpu:                2
	I1014 08:47:32.080960   15224 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I1014 08:47:32.080960   15224 command_runner.go:130] >   hugepages-2Mi:      0
	I1014 08:47:32.080960   15224 command_runner.go:130] >   memory:             2164264Ki
	I1014 08:47:32.080960   15224 command_runner.go:130] >   pods:               110
	I1014 08:47:32.080960   15224 command_runner.go:130] > System Info:
	I1014 08:47:32.080960   15224 command_runner.go:130] >   Machine ID:                 fc389f3b9e2846b4b909cfc8e7984541
	I1014 08:47:32.081981   15224 command_runner.go:130] >   System UUID:                f72bc210-2ad8-1e4f-ad7d-3e75159c3d98
	I1014 08:47:32.081981   15224 command_runner.go:130] >   Boot ID:                    98d09a99-1eff-402d-837f-6cacdc4463d7
	I1014 08:47:32.081981   15224 command_runner.go:130] >   Kernel Version:             5.10.207
	I1014 08:47:32.081981   15224 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I1014 08:47:32.081981   15224 command_runner.go:130] >   Operating System:           linux
	I1014 08:47:32.081981   15224 command_runner.go:130] >   Architecture:               amd64
	I1014 08:47:32.081981   15224 command_runner.go:130] >   Container Runtime Version:  docker://27.3.1
	I1014 08:47:32.081981   15224 command_runner.go:130] >   Kubelet Version:            v1.31.1
	I1014 08:47:32.081981   15224 command_runner.go:130] >   Kube-Proxy Version:         v1.31.1
	I1014 08:47:32.081981   15224 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I1014 08:47:32.081981   15224 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I1014 08:47:32.081981   15224 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I1014 08:47:32.081981   15224 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I1014 08:47:32.081981   15224 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I1014 08:47:32.081981   15224 command_runner.go:130] >   default                     busybox-7dff88458-vlp7j                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I1014 08:47:32.081981   15224 command_runner.go:130] >   kube-system                 coredns-7c65d6cfc9-fs9ct                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     24m
	I1014 08:47:32.081981   15224 command_runner.go:130] >   kube-system                 etcd-multinode-671000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         75s
	I1014 08:47:32.081981   15224 command_runner.go:130] >   kube-system                 kindnet-wqrx6                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      24m
	I1014 08:47:32.081981   15224 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-671000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         77s
	I1014 08:47:32.081981   15224 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-671000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         24m
	I1014 08:47:32.081981   15224 command_runner.go:130] >   kube-system                 kube-proxy-r74dx                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	I1014 08:47:32.081981   15224 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-671000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24m
	I1014 08:47:32.081981   15224 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	I1014 08:47:32.081981   15224 command_runner.go:130] > Allocated resources:
	I1014 08:47:32.081981   15224 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I1014 08:47:32.081981   15224 command_runner.go:130] >   Resource           Requests     Limits
	I1014 08:47:32.081981   15224 command_runner.go:130] >   --------           --------     ------
	I1014 08:47:32.081981   15224 command_runner.go:130] >   cpu                850m (42%)   100m (5%)
	I1014 08:47:32.081981   15224 command_runner.go:130] >   memory             220Mi (10%)  220Mi (10%)
	I1014 08:47:32.081981   15224 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I1014 08:47:32.081981   15224 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I1014 08:47:32.081981   15224 command_runner.go:130] > Events:
	I1014 08:47:32.081981   15224 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I1014 08:47:32.081981   15224 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I1014 08:47:32.081981   15224 command_runner.go:130] >   Normal  Starting                 24m                kube-proxy       
	I1014 08:47:32.081981   15224 command_runner.go:130] >   Normal  Starting                 73s                kube-proxy       
	I1014 08:47:32.081981   15224 command_runner.go:130] >   Normal  Starting                 25m                kubelet          Starting kubelet.
	I1014 08:47:32.081981   15224 command_runner.go:130] >   Normal  NodeHasSufficientMemory  25m (x8 over 25m)  kubelet          Node multinode-671000 status is now: NodeHasSufficientMemory
	I1014 08:47:32.081981   15224 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    25m (x8 over 25m)  kubelet          Node multinode-671000 status is now: NodeHasNoDiskPressure
	I1014 08:47:32.081981   15224 command_runner.go:130] >   Normal  NodeHasSufficientPID     25m (x7 over 25m)  kubelet          Node multinode-671000 status is now: NodeHasSufficientPID
	I1014 08:47:32.081981   15224 command_runner.go:130] >   Normal  NodeAllocatableEnforced  25m                kubelet          Updated Node Allocatable limit across pods
	I1014 08:47:32.081981   15224 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    24m                kubelet          Node multinode-671000 status is now: NodeHasNoDiskPressure
	I1014 08:47:32.081981   15224 command_runner.go:130] >   Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	I1014 08:47:32.081981   15224 command_runner.go:130] >   Normal  NodeHasSufficientMemory  24m                kubelet          Node multinode-671000 status is now: NodeHasSufficientMemory
	I1014 08:47:32.081981   15224 command_runner.go:130] >   Normal  NodeHasSufficientPID     24m                kubelet          Node multinode-671000 status is now: NodeHasSufficientPID
	I1014 08:47:32.081981   15224 command_runner.go:130] >   Normal  Starting                 24m                kubelet          Starting kubelet.
	I1014 08:47:32.081981   15224 command_runner.go:130] >   Normal  RegisteredNode           24m                node-controller  Node multinode-671000 event: Registered Node multinode-671000 in Controller
	I1014 08:47:32.081981   15224 command_runner.go:130] >   Normal  NodeReady                24m                kubelet          Node multinode-671000 status is now: NodeReady
	I1014 08:47:32.081981   15224 command_runner.go:130] >   Normal  Starting                 83s                kubelet          Starting kubelet.
	I1014 08:47:32.081981   15224 command_runner.go:130] >   Normal  NodeAllocatableEnforced  83s                kubelet          Updated Node Allocatable limit across pods
	I1014 08:47:32.081981   15224 command_runner.go:130] >   Normal  NodeHasSufficientMemory  82s (x8 over 83s)  kubelet          Node multinode-671000 status is now: NodeHasSufficientMemory
	I1014 08:47:32.081981   15224 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    82s (x8 over 83s)  kubelet          Node multinode-671000 status is now: NodeHasNoDiskPressure
	I1014 08:47:32.081981   15224 command_runner.go:130] >   Normal  NodeHasSufficientPID     82s (x7 over 83s)  kubelet          Node multinode-671000 status is now: NodeHasSufficientPID
	I1014 08:47:32.081981   15224 command_runner.go:130] >   Normal  RegisteredNode           74s                node-controller  Node multinode-671000 event: Registered Node multinode-671000 in Controller
	I1014 08:47:32.081981   15224 command_runner.go:130] > Name:               multinode-671000-m02
	I1014 08:47:32.081981   15224 command_runner.go:130] > Roles:              <none>
	I1014 08:47:32.081981   15224 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I1014 08:47:32.081981   15224 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I1014 08:47:32.081981   15224 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I1014 08:47:32.081981   15224 command_runner.go:130] >                     kubernetes.io/hostname=multinode-671000-m02
	I1014 08:47:32.081981   15224 command_runner.go:130] >                     kubernetes.io/os=linux
	I1014 08:47:32.081981   15224 command_runner.go:130] >                     minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b
	I1014 08:47:32.081981   15224 command_runner.go:130] >                     minikube.k8s.io/name=multinode-671000
	I1014 08:47:32.081981   15224 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I1014 08:47:32.081981   15224 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_10_14T08_25_50_0700
	I1014 08:47:32.081981   15224 command_runner.go:130] >                     minikube.k8s.io/version=v1.34.0
	I1014 08:47:32.082955   15224 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I1014 08:47:32.082955   15224 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I1014 08:47:32.082955   15224 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I1014 08:47:32.082955   15224 command_runner.go:130] > CreationTimestamp:  Mon, 14 Oct 2024 15:25:49 +0000
	I1014 08:47:32.082955   15224 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I1014 08:47:32.082955   15224 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I1014 08:47:32.082955   15224 command_runner.go:130] > Unschedulable:      false
	I1014 08:47:32.082955   15224 command_runner.go:130] > Lease:
	I1014 08:47:32.082955   15224 command_runner.go:130] >   HolderIdentity:  multinode-671000-m02
	I1014 08:47:32.082955   15224 command_runner.go:130] >   AcquireTime:     <unset>
	I1014 08:47:32.082955   15224 command_runner.go:130] >   RenewTime:       Mon, 14 Oct 2024 15:43:00 +0000
	I1014 08:47:32.082955   15224 command_runner.go:130] > Conditions:
	I1014 08:47:32.082955   15224 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I1014 08:47:32.082955   15224 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I1014 08:47:32.082955   15224 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 14 Oct 2024 15:42:08 +0000   Mon, 14 Oct 2024 15:43:45 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I1014 08:47:32.082955   15224 command_runner.go:130] >   DiskPressure     Unknown   Mon, 14 Oct 2024 15:42:08 +0000   Mon, 14 Oct 2024 15:43:45 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I1014 08:47:32.082955   15224 command_runner.go:130] >   PIDPressure      Unknown   Mon, 14 Oct 2024 15:42:08 +0000   Mon, 14 Oct 2024 15:43:45 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I1014 08:47:32.082955   15224 command_runner.go:130] >   Ready            Unknown   Mon, 14 Oct 2024 15:42:08 +0000   Mon, 14 Oct 2024 15:43:45 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I1014 08:47:32.082955   15224 command_runner.go:130] > Addresses:
	I1014 08:47:32.082955   15224 command_runner.go:130] >   InternalIP:  172.20.109.137
	I1014 08:47:32.082955   15224 command_runner.go:130] >   Hostname:    multinode-671000-m02
	I1014 08:47:32.082955   15224 command_runner.go:130] > Capacity:
	I1014 08:47:32.082955   15224 command_runner.go:130] >   cpu:                2
	I1014 08:47:32.082955   15224 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I1014 08:47:32.082955   15224 command_runner.go:130] >   hugepages-2Mi:      0
	I1014 08:47:32.082955   15224 command_runner.go:130] >   memory:             2164264Ki
	I1014 08:47:32.082955   15224 command_runner.go:130] >   pods:               110
	I1014 08:47:32.082955   15224 command_runner.go:130] > Allocatable:
	I1014 08:47:32.082955   15224 command_runner.go:130] >   cpu:                2
	I1014 08:47:32.082955   15224 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I1014 08:47:32.082955   15224 command_runner.go:130] >   hugepages-2Mi:      0
	I1014 08:47:32.082955   15224 command_runner.go:130] >   memory:             2164264Ki
	I1014 08:47:32.082955   15224 command_runner.go:130] >   pods:               110
	I1014 08:47:32.082955   15224 command_runner.go:130] > System Info:
	I1014 08:47:32.082955   15224 command_runner.go:130] >   Machine ID:                 d57a3470ff3c4f03abf025a19c5c23d9
	I1014 08:47:32.082955   15224 command_runner.go:130] >   System UUID:                d36e677f-0d87-a94f-af28-d8324326f88f
	I1014 08:47:32.082955   15224 command_runner.go:130] >   Boot ID:                    33649cab-156e-4789-8adf-a6fc060b3de7
	I1014 08:47:32.082955   15224 command_runner.go:130] >   Kernel Version:             5.10.207
	I1014 08:47:32.082955   15224 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I1014 08:47:32.082955   15224 command_runner.go:130] >   Operating System:           linux
	I1014 08:47:32.082955   15224 command_runner.go:130] >   Architecture:               amd64
	I1014 08:47:32.082955   15224 command_runner.go:130] >   Container Runtime Version:  docker://27.3.1
	I1014 08:47:32.082955   15224 command_runner.go:130] >   Kubelet Version:            v1.31.1
	I1014 08:47:32.082955   15224 command_runner.go:130] >   Kube-Proxy Version:         v1.31.1
	I1014 08:47:32.082955   15224 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I1014 08:47:32.082955   15224 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I1014 08:47:32.082955   15224 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I1014 08:47:32.082955   15224 command_runner.go:130] >   Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I1014 08:47:32.082955   15224 command_runner.go:130] >   ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	I1014 08:47:32.082955   15224 command_runner.go:130] >   default                     busybox-7dff88458-bnqj6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I1014 08:47:32.082955   15224 command_runner.go:130] >   kube-system                 kindnet-rgbjf              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      21m
	I1014 08:47:32.082955   15224 command_runner.go:130] >   kube-system                 kube-proxy-kbpjf           0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	I1014 08:47:32.082955   15224 command_runner.go:130] > Allocated resources:
	I1014 08:47:32.082955   15224 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I1014 08:47:32.082955   15224 command_runner.go:130] >   Resource           Requests   Limits
	I1014 08:47:32.082955   15224 command_runner.go:130] >   --------           --------   ------
	I1014 08:47:32.082955   15224 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I1014 08:47:32.082955   15224 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I1014 08:47:32.082955   15224 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I1014 08:47:32.082955   15224 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I1014 08:47:32.082955   15224 command_runner.go:130] > Events:
	I1014 08:47:32.082955   15224 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I1014 08:47:32.082955   15224 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I1014 08:47:32.082955   15224 command_runner.go:130] >   Normal  Starting                 21m                kube-proxy       
	I1014 08:47:32.082955   15224 command_runner.go:130] >   Normal  NodeHasSufficientMemory  21m (x2 over 21m)  kubelet          Node multinode-671000-m02 status is now: NodeHasSufficientMemory
	I1014 08:47:32.082955   15224 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    21m (x2 over 21m)  kubelet          Node multinode-671000-m02 status is now: NodeHasNoDiskPressure
	I1014 08:47:32.082955   15224 command_runner.go:130] >   Normal  NodeHasSufficientPID     21m (x2 over 21m)  kubelet          Node multinode-671000-m02 status is now: NodeHasSufficientPID
	I1014 08:47:32.082955   15224 command_runner.go:130] >   Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	I1014 08:47:32.082955   15224 command_runner.go:130] >   Normal  RegisteredNode           21m                node-controller  Node multinode-671000-m02 event: Registered Node multinode-671000-m02 in Controller
	I1014 08:47:32.082955   15224 command_runner.go:130] >   Normal  NodeReady                21m                kubelet          Node multinode-671000-m02 status is now: NodeReady
	I1014 08:47:32.082955   15224 command_runner.go:130] >   Normal  NodeNotReady             3m47s              node-controller  Node multinode-671000-m02 status is now: NodeNotReady
	I1014 08:47:32.082955   15224 command_runner.go:130] >   Normal  RegisteredNode           74s                node-controller  Node multinode-671000-m02 event: Registered Node multinode-671000-m02 in Controller
	I1014 08:47:32.082955   15224 command_runner.go:130] > Name:               multinode-671000-m03
	I1014 08:47:32.082955   15224 command_runner.go:130] > Roles:              <none>
	I1014 08:47:32.082955   15224 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I1014 08:47:32.082955   15224 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I1014 08:47:32.082955   15224 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I1014 08:47:32.082955   15224 command_runner.go:130] >                     kubernetes.io/hostname=multinode-671000-m03
	I1014 08:47:32.082955   15224 command_runner.go:130] >                     kubernetes.io/os=linux
	I1014 08:47:32.082955   15224 command_runner.go:130] >                     minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b
	I1014 08:47:32.082955   15224 command_runner.go:130] >                     minikube.k8s.io/name=multinode-671000
	I1014 08:47:32.082955   15224 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I1014 08:47:32.082955   15224 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_10_14T08_41_35_0700
	I1014 08:47:32.083951   15224 command_runner.go:130] >                     minikube.k8s.io/version=v1.34.0
	I1014 08:47:32.083951   15224 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I1014 08:47:32.083951   15224 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I1014 08:47:32.083951   15224 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I1014 08:47:32.083951   15224 command_runner.go:130] > CreationTimestamp:  Mon, 14 Oct 2024 15:41:34 +0000
	I1014 08:47:32.083951   15224 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I1014 08:47:32.083951   15224 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I1014 08:47:32.083951   15224 command_runner.go:130] > Unschedulable:      false
	I1014 08:47:32.083951   15224 command_runner.go:130] > Lease:
	I1014 08:47:32.083951   15224 command_runner.go:130] >   HolderIdentity:  multinode-671000-m03
	I1014 08:47:32.083951   15224 command_runner.go:130] >   AcquireTime:     <unset>
	I1014 08:47:32.083951   15224 command_runner.go:130] >   RenewTime:       Mon, 14 Oct 2024 15:42:46 +0000
	I1014 08:47:32.083951   15224 command_runner.go:130] > Conditions:
	I1014 08:47:32.083951   15224 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I1014 08:47:32.083951   15224 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I1014 08:47:32.083951   15224 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 14 Oct 2024 15:41:53 +0000   Mon, 14 Oct 2024 15:43:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I1014 08:47:32.083951   15224 command_runner.go:130] >   DiskPressure     Unknown   Mon, 14 Oct 2024 15:41:53 +0000   Mon, 14 Oct 2024 15:43:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I1014 08:47:32.083951   15224 command_runner.go:130] >   PIDPressure      Unknown   Mon, 14 Oct 2024 15:41:53 +0000   Mon, 14 Oct 2024 15:43:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I1014 08:47:32.083951   15224 command_runner.go:130] >   Ready            Unknown   Mon, 14 Oct 2024 15:41:53 +0000   Mon, 14 Oct 2024 15:43:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I1014 08:47:32.083951   15224 command_runner.go:130] > Addresses:
	I1014 08:47:32.083951   15224 command_runner.go:130] >   InternalIP:  172.20.102.29
	I1014 08:47:32.083951   15224 command_runner.go:130] >   Hostname:    multinode-671000-m03
	I1014 08:47:32.083951   15224 command_runner.go:130] > Capacity:
	I1014 08:47:32.083951   15224 command_runner.go:130] >   cpu:                2
	I1014 08:47:32.083951   15224 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I1014 08:47:32.083951   15224 command_runner.go:130] >   hugepages-2Mi:      0
	I1014 08:47:32.083951   15224 command_runner.go:130] >   memory:             2164264Ki
	I1014 08:47:32.083951   15224 command_runner.go:130] >   pods:               110
	I1014 08:47:32.083951   15224 command_runner.go:130] > Allocatable:
	I1014 08:47:32.083951   15224 command_runner.go:130] >   cpu:                2
	I1014 08:47:32.083951   15224 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I1014 08:47:32.083951   15224 command_runner.go:130] >   hugepages-2Mi:      0
	I1014 08:47:32.083951   15224 command_runner.go:130] >   memory:             2164264Ki
	I1014 08:47:32.083951   15224 command_runner.go:130] >   pods:               110
	I1014 08:47:32.083951   15224 command_runner.go:130] > System Info:
	I1014 08:47:32.083951   15224 command_runner.go:130] >   Machine ID:                 6da8cf5e96c04d55b9129d0893534bf2
	I1014 08:47:32.083951   15224 command_runner.go:130] >   System UUID:                49616488-815a-3f43-8f47-13dbf29b6ca7
	I1014 08:47:32.083951   15224 command_runner.go:130] >   Boot ID:                    d9fe58fb-ac8e-4430-9563-1b3e9fd35ffd
	I1014 08:47:32.083951   15224 command_runner.go:130] >   Kernel Version:             5.10.207
	I1014 08:47:32.083951   15224 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I1014 08:47:32.083951   15224 command_runner.go:130] >   Operating System:           linux
	I1014 08:47:32.083951   15224 command_runner.go:130] >   Architecture:               amd64
	I1014 08:47:32.083951   15224 command_runner.go:130] >   Container Runtime Version:  docker://27.3.1
	I1014 08:47:32.083951   15224 command_runner.go:130] >   Kubelet Version:            v1.31.1
	I1014 08:47:32.083951   15224 command_runner.go:130] >   Kube-Proxy Version:         v1.31.1
	I1014 08:47:32.083951   15224 command_runner.go:130] > PodCIDR:                      10.244.3.0/24
	I1014 08:47:32.083951   15224 command_runner.go:130] > PodCIDRs:                     10.244.3.0/24
	I1014 08:47:32.083951   15224 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I1014 08:47:32.083951   15224 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I1014 08:47:32.083951   15224 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I1014 08:47:32.083951   15224 command_runner.go:130] >   kube-system                 kindnet-5rqxq       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	I1014 08:47:32.083951   15224 command_runner.go:130] >   kube-system                 kube-proxy-n6txs    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	I1014 08:47:32.083951   15224 command_runner.go:130] > Allocated resources:
	I1014 08:47:32.083951   15224 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I1014 08:47:32.083951   15224 command_runner.go:130] >   Resource           Requests   Limits
	I1014 08:47:32.083951   15224 command_runner.go:130] >   --------           --------   ------
	I1014 08:47:32.083951   15224 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I1014 08:47:32.083951   15224 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I1014 08:47:32.083951   15224 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I1014 08:47:32.083951   15224 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I1014 08:47:32.083951   15224 command_runner.go:130] > Events:
	I1014 08:47:32.083951   15224 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I1014 08:47:32.083951   15224 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I1014 08:47:32.083951   15224 command_runner.go:130] >   Normal  Starting                 5m53s                  kube-proxy       
	I1014 08:47:32.083951   15224 command_runner.go:130] >   Normal  Starting                 16m                    kube-proxy       
	I1014 08:47:32.083951   15224 command_runner.go:130] >   Normal  NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	I1014 08:47:32.083951   15224 command_runner.go:130] >   Normal  NodeHasSufficientMemory  16m (x2 over 16m)      kubelet          Node multinode-671000-m03 status is now: NodeHasSufficientMemory
	I1014 08:47:32.083951   15224 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    16m (x2 over 16m)      kubelet          Node multinode-671000-m03 status is now: NodeHasNoDiskPressure
	I1014 08:47:32.083951   15224 command_runner.go:130] >   Normal  NodeHasSufficientPID     16m (x2 over 16m)      kubelet          Node multinode-671000-m03 status is now: NodeHasSufficientPID
	I1014 08:47:32.083951   15224 command_runner.go:130] >   Normal  NodeReady                16m                    kubelet          Node multinode-671000-m03 status is now: NodeReady
	I1014 08:47:32.083951   15224 command_runner.go:130] >   Normal  Starting                 5m58s                  kubelet          Starting kubelet.
	I1014 08:47:32.083951   15224 command_runner.go:130] >   Normal  NodeHasSufficientMemory  5m58s (x2 over 5m58s)  kubelet          Node multinode-671000-m03 status is now: NodeHasSufficientMemory
	I1014 08:47:32.083951   15224 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    5m58s (x2 over 5m58s)  kubelet          Node multinode-671000-m03 status is now: NodeHasNoDiskPressure
	I1014 08:47:32.083951   15224 command_runner.go:130] >   Normal  NodeHasSufficientPID     5m58s (x2 over 5m58s)  kubelet          Node multinode-671000-m03 status is now: NodeHasSufficientPID
	I1014 08:47:32.083951   15224 command_runner.go:130] >   Normal  NodeAllocatableEnforced  5m58s                  kubelet          Updated Node Allocatable limit across pods
	I1014 08:47:32.083951   15224 command_runner.go:130] >   Normal  RegisteredNode           5m53s                  node-controller  Node multinode-671000-m03 event: Registered Node multinode-671000-m03 in Controller
	I1014 08:47:32.083951   15224 command_runner.go:130] >   Normal  NodeReady                5m39s                  kubelet          Node multinode-671000-m03 status is now: NodeReady
	I1014 08:47:32.083951   15224 command_runner.go:130] >   Normal  NodeNotReady             4m3s                   node-controller  Node multinode-671000-m03 status is now: NodeNotReady
	I1014 08:47:32.083951   15224 command_runner.go:130] >   Normal  RegisteredNode           74s                    node-controller  Node multinode-671000-m03 event: Registered Node multinode-671000-m03 in Controller
	I1014 08:47:32.093961   15224 logs.go:123] Gathering logs for kube-scheduler [661e75bbf6b4] ...
	I1014 08:47:32.093961   15224 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 661e75bbf6b4"
	I1014 08:47:32.122585   15224 command_runner.go:130] ! I1014 15:22:34.688194       1 serving.go:386] Generated self-signed cert in-memory
	I1014 08:47:32.122585   15224 command_runner.go:130] ! W1014 15:22:36.199586       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I1014 08:47:32.122585   15224 command_runner.go:130] ! W1014 15:22:36.199661       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I1014 08:47:32.122585   15224 command_runner.go:130] ! W1014 15:22:36.199675       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	I1014 08:47:32.122585   15224 command_runner.go:130] ! W1014 15:22:36.199681       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1014 08:47:32.122585   15224 command_runner.go:130] ! I1014 15:22:36.288536       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I1014 08:47:32.122585   15224 command_runner.go:130] ! I1014 15:22:36.288649       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 08:47:32.122585   15224 command_runner.go:130] ! I1014 15:22:36.292628       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1014 08:47:32.122585   15224 command_runner.go:130] ! I1014 15:22:36.292942       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1014 08:47:32.122585   15224 command_runner.go:130] ! I1014 15:22:36.293038       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1014 08:47:32.122585   15224 command_runner.go:130] ! I1014 15:22:36.293102       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1014 08:47:32.122585   15224 command_runner.go:130] ! W1014 15:22:36.298034       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I1014 08:47:32.122585   15224 command_runner.go:130] ! E1014 15:22:36.298090       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:32.122585   15224 command_runner.go:130] ! W1014 15:22:36.298377       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I1014 08:47:32.122585   15224 command_runner.go:130] ! E1014 15:22:36.298420       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:32.122585   15224 command_runner.go:130] ! W1014 15:22:36.298587       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I1014 08:47:32.122585   15224 command_runner.go:130] ! E1014 15:22:36.298642       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:32.123111   15224 command_runner.go:130] ! W1014 15:22:36.298730       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I1014 08:47:32.123111   15224 command_runner.go:130] ! E1014 15:22:36.298855       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:32.123111   15224 command_runner.go:130] ! W1014 15:22:36.299272       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I1014 08:47:32.123111   15224 command_runner.go:130] ! E1014 15:22:36.299314       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:32.123297   15224 command_runner.go:130] ! W1014 15:22:36.299416       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I1014 08:47:32.123408   15224 command_runner.go:130] ! E1014 15:22:36.299618       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:32.123408   15224 command_runner.go:130] ! W1014 15:22:36.299693       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I1014 08:47:32.123541   15224 command_runner.go:130] ! E1014 15:22:36.299710       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:32.123594   15224 command_runner.go:130] ! W1014 15:22:36.299857       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I1014 08:47:32.123594   15224 command_runner.go:130] ! E1014 15:22:36.299920       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:32.123594   15224 command_runner.go:130] ! W1014 15:22:36.302822       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1014 08:47:32.123686   15224 command_runner.go:130] ! E1014 15:22:36.303096       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1014 08:47:32.123767   15224 command_runner.go:130] ! W1014 15:22:36.303242       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I1014 08:47:32.123830   15224 command_runner.go:130] ! E1014 15:22:36.303288       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:32.123830   15224 command_runner.go:130] ! W1014 15:22:36.303391       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I1014 08:47:32.123830   15224 command_runner.go:130] ! E1014 15:22:36.303426       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:32.123830   15224 command_runner.go:130] ! W1014 15:22:36.303605       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I1014 08:47:32.123830   15224 command_runner.go:130] ! E1014 15:22:36.303643       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:32.123830   15224 command_runner.go:130] ! W1014 15:22:36.303739       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I1014 08:47:32.123830   15224 command_runner.go:130] ! E1014 15:22:36.303771       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:32.123830   15224 command_runner.go:130] ! W1014 15:22:36.303825       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I1014 08:47:32.123830   15224 command_runner.go:130] ! E1014 15:22:36.303860       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:32.123830   15224 command_runner.go:130] ! W1014 15:22:36.304041       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I1014 08:47:32.123830   15224 command_runner.go:130] ! E1014 15:22:36.304079       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:32.123830   15224 command_runner.go:130] ! W1014 15:22:37.145637       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I1014 08:47:32.124371   15224 command_runner.go:130] ! E1014 15:22:37.146051       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:32.124371   15224 command_runner.go:130] ! W1014 15:22:37.146415       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I1014 08:47:32.124467   15224 command_runner.go:130] ! E1014 15:22:37.146705       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:32.124498   15224 command_runner.go:130] ! W1014 15:22:37.189116       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I1014 08:47:32.124581   15224 command_runner.go:130] ! E1014 15:22:37.189252       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:32.124660   15224 command_runner.go:130] ! W1014 15:22:37.205810       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I1014 08:47:32.124727   15224 command_runner.go:130] ! E1014 15:22:37.206152       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:32.124837   15224 command_runner.go:130] ! W1014 15:22:37.269786       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I1014 08:47:32.124884   15224 command_runner.go:130] ! E1014 15:22:37.269856       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:32.124884   15224 command_runner.go:130] ! W1014 15:22:37.270258       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I1014 08:47:32.124884   15224 command_runner.go:130] ! E1014 15:22:37.270286       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:32.124884   15224 command_runner.go:130] ! W1014 15:22:37.290857       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I1014 08:47:32.124884   15224 command_runner.go:130] ! E1014 15:22:37.290997       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:32.124884   15224 command_runner.go:130] ! W1014 15:22:37.304519       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1014 08:47:32.124884   15224 command_runner.go:130] ! E1014 15:22:37.305020       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1014 08:47:32.124884   15224 command_runner.go:130] ! W1014 15:22:37.328746       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I1014 08:47:32.124884   15224 command_runner.go:130] ! E1014 15:22:37.329049       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:32.124884   15224 command_runner.go:130] ! W1014 15:22:37.338059       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I1014 08:47:32.124884   15224 command_runner.go:130] ! E1014 15:22:37.338269       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:32.124884   15224 command_runner.go:130] ! W1014 15:22:37.394728       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I1014 08:47:32.124884   15224 command_runner.go:130] ! E1014 15:22:37.394803       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:32.124884   15224 command_runner.go:130] ! W1014 15:22:37.445455       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I1014 08:47:32.125416   15224 command_runner.go:130] ! E1014 15:22:37.445612       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:32.125416   15224 command_runner.go:130] ! W1014 15:22:37.490285       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I1014 08:47:32.125509   15224 command_runner.go:130] ! E1014 15:22:37.490817       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:32.125555   15224 command_runner.go:130] ! W1014 15:22:37.537304       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I1014 08:47:32.125585   15224 command_runner.go:130] ! E1014 15:22:37.537396       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:32.125585   15224 command_runner.go:130] ! W1014 15:22:37.713713       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I1014 08:47:32.125585   15224 command_runner.go:130] ! E1014 15:22:37.713777       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:32.125585   15224 command_runner.go:130] ! I1014 15:22:39.593596       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1014 08:47:32.125585   15224 command_runner.go:130] ! I1014 15:43:46.388691       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I1014 08:47:32.125585   15224 command_runner.go:130] ! I1014 15:43:46.388783       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1014 08:47:32.125585   15224 command_runner.go:130] ! I1014 15:43:46.389141       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1014 08:47:32.125585   15224 command_runner.go:130] ! E1014 15:43:46.389549       1 run.go:72] "command failed" err="finished without leader elect"
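	Note: the burst of "forbidden" list/watch failures above is the scheduler's informers racing RBAC propagation during control-plane bootstrap; they stop once the client-ca cache sync at 15:22:39 lands, and the final "finished without leader elect" entry at 15:43:46 is the scheduler being shut down, not a crash. As a hedged sketch (not part of the test harness), the same permission can be checked after the fact from Go with client-go's SubjectAccessReview; the kubeconfig location is an assumption:

```go
// Hedged sketch: ask the API server whether system:kube-scheduler may list
// poddisruptionbudgets, one of the resources reported "forbidden" above.
// Assumes a reachable cluster and a kubeconfig at the default home path;
// neither is taken from this report.
package main

import (
	"context"
	"fmt"
	"log"

	authv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	sar := &authv1.SubjectAccessReview{
		Spec: authv1.SubjectAccessReviewSpec{
			User: "system:kube-scheduler",
			ResourceAttributes: &authv1.ResourceAttributes{
				Verb:     "list",
				Group:    "policy",
				Resource: "poddisruptionbudgets",
			},
		},
	}
	res, err := cs.AuthorizationV1().SubjectAccessReviews().Create(
		context.Background(), sar, metav1.CreateOptions{})
	if err != nil {
		log.Fatal(err)
	}
	// Once RBAC has propagated, this prints allowed=true.
	fmt.Printf("allowed=%v reason=%q\n", res.Status.Allowed, res.Status.Reason)
}
```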
	I1014 08:47:32.138622   15224 logs.go:123] Gathering logs for etcd [48c8492e231e] ...
	I1014 08:47:32.138622   15224 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48c8492e231e"
	I1014 08:47:32.168336   15224 command_runner.go:130] ! {"level":"warn","ts":"2024-10-14T15:46:11.845953Z","caller":"embed/config.go:687","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I1014 08:47:32.168336   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:11.848739Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.20.106.123:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.20.106.123:2380","--initial-cluster=multinode-671000=https://172.20.106.123:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.20.106.123:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.20.106.123:2380","--name=multinode-671000","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I1014 08:47:32.168336   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:11.848857Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I1014 08:47:32.168336   15224 command_runner.go:130] ! {"level":"warn","ts":"2024-10-14T15:46:11.848886Z","caller":"embed/config.go:687","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I1014 08:47:32.168336   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:11.848900Z","caller":"embed/etcd.go:128","msg":"configuring peer listeners","listen-peer-urls":["https://172.20.106.123:2380"]}
	I1014 08:47:32.168336   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:11.848962Z","caller":"embed/etcd.go:496","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I1014 08:47:32.169309   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:11.854418Z","caller":"embed/etcd.go:136","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.20.106.123:2379"]}
	I1014 08:47:32.169309   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:11.857036Z","caller":"embed/etcd.go:310","msg":"starting an etcd server","etcd-version":"3.5.15","git-sha":"9a5533382","go-version":"go1.21.12","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-671000","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.20.106.123:2380"],"listen-peer-urls":["https://172.20.106.123:2380"],"advertise-client-urls":["https://172.20.106.123:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.20.106.123:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I1014 08:47:32.169309   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:11.899392Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"40.66952ms"}
	I1014 08:47:32.169309   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:11.949173Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	I1014 08:47:32.169309   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:11.984197Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"2dcbff584edb18cc","local-member-id":"782c48cbdf98397b","commit-index":2088}
	I1014 08:47:32.169309   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:11.985089Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"782c48cbdf98397b switched to configuration voters=()"}
	I1014 08:47:32.169309   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:11.987739Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"782c48cbdf98397b became follower at term 2"}
	I1014 08:47:32.169309   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:11.987772Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 782c48cbdf98397b [peers: [], term: 2, commit: 2088, applied: 0, lastindex: 2088, lastterm: 2]"}
	I1014 08:47:32.169309   15224 command_runner.go:130] ! {"level":"warn","ts":"2024-10-14T15:46:12.003567Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	I1014 08:47:32.169309   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.010981Z","caller":"mvcc/kvstore.go:341","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1396}
	I1014 08:47:32.169309   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.025362Z","caller":"mvcc/kvstore.go:418","msg":"kvstore restored","current-rev":1813}
	I1014 08:47:32.169309   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.035174Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I1014 08:47:32.169309   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.045608Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"782c48cbdf98397b","timeout":"7s"}
	I1014 08:47:32.169309   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.046705Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"782c48cbdf98397b"}
	I1014 08:47:32.169309   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.046807Z","caller":"etcdserver/server.go:867","msg":"starting etcd server","local-member-id":"782c48cbdf98397b","local-server-version":"3.5.15","cluster-version":"to_be_decided"}
	I1014 08:47:32.169309   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.047198Z","caller":"etcdserver/server.go:767","msg":"starting initial election tick advance","election-ticks":10}
	I1014 08:47:32.169309   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.047977Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I1014 08:47:32.169309   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.048044Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I1014 08:47:32.169309   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.048058Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I1014 08:47:32.169309   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.048736Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"782c48cbdf98397b switched to configuration voters=(8659376223993477499)"}
	I1014 08:47:32.169309   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.049262Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"2dcbff584edb18cc","local-member-id":"782c48cbdf98397b","added-peer-id":"782c48cbdf98397b","added-peer-peer-urls":["https://172.20.100.167:2380"]}
	I1014 08:47:32.169309   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.049694Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"2dcbff584edb18cc","local-member-id":"782c48cbdf98397b","cluster-version":"3.5"}
	I1014 08:47:32.169309   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.049815Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I1014 08:47:32.169309   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.056204Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	I1014 08:47:32.169309   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.062166Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I1014 08:47:32.169309   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.062574Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"782c48cbdf98397b","initial-advertise-peer-urls":["https://172.20.106.123:2380"],"listen-peer-urls":["https://172.20.106.123:2380"],"advertise-client-urls":["https://172.20.106.123:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.20.106.123:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I1014 08:47:32.169309   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.062654Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I1014 08:47:32.169309   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.062749Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"172.20.106.123:2380"}
	I1014 08:47:32.169309   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.062764Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"172.20.106.123:2380"}
	I1014 08:47:32.169309   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.489231Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"782c48cbdf98397b is starting a new election at term 2"}
	I1014 08:47:32.169309   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.489328Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"782c48cbdf98397b became pre-candidate at term 2"}
	I1014 08:47:32.169309   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.489358Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"782c48cbdf98397b received MsgPreVoteResp from 782c48cbdf98397b at term 2"}
	I1014 08:47:32.169309   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.489374Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"782c48cbdf98397b became candidate at term 3"}
	I1014 08:47:32.169309   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.489385Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"782c48cbdf98397b received MsgVoteResp from 782c48cbdf98397b at term 3"}
	I1014 08:47:32.169309   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.489395Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"782c48cbdf98397b became leader at term 3"}
	I1014 08:47:32.169309   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.489487Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 782c48cbdf98397b elected leader 782c48cbdf98397b at term 3"}
	I1014 08:47:32.169309   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.496949Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I1014 08:47:32.169309   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.496902Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"782c48cbdf98397b","local-member-attributes":"{Name:multinode-671000 ClientURLs:[https://172.20.106.123:2379]}","request-path":"/0/members/782c48cbdf98397b/attributes","cluster-id":"2dcbff584edb18cc","publish-timeout":"7s"}
	I1014 08:47:32.169309   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.497822Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I1014 08:47:32.169309   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.499586Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I1014 08:47:32.169309   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.499631Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I1014 08:47:32.169309   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.500815Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	I1014 08:47:32.169309   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.502392Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.20.106.123:2379"}
	I1014 08:47:32.169309   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.503879Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	I1014 08:47:32.170315   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.505686Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
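	Note: this etcd log shows a healthy single-member restart: the WAL is replayed at term 2 ("No snapshot found. Recovering WAL from scratch!"), then member 782c48cbdf98397b pre-votes, becomes candidate, and elects itself leader at term 3 before serving client traffic on 2379. A hedged sketch for confirming the member's leader and raft term with the official go.etcd.io/etcd/client/v3 package follows; the certificate paths are copied from the startup flags above, and running it inside the VM with the server certificate accepted for client auth is an assumption:

```go
// Hedged sketch: query the restarted etcd member's status (version, leader
// ID, raft term) over its loopback client endpoint. Cert paths come from
// the flags logged above; execution inside the minikube VM is assumed.
package main

import (
	"context"
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"log"
	"os"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	cert, err := tls.LoadX509KeyPair(
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/etcd/server.key")
	if err != nil {
		log.Fatal(err)
	}
	caPEM, err := os.ReadFile("/var/lib/minikube/certs/etcd/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"https://127.0.0.1:2379"},
		DialTimeout: 5 * time.Second,
		TLS: &tls.Config{
			Certificates: []tls.Certificate{cert},
			RootCAs:      pool,
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	st, err := cli.Status(ctx, "https://127.0.0.1:2379")
	if err != nil {
		log.Fatal(err)
	}
	// After the restart above this should report raftTerm=3 and the member's
	// own ID as leader.
	fmt.Printf("version=%s leader=%x raftTerm=%d\n", st.Version, st.Leader, st.RaftTerm)
}
```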
	I1014 08:47:32.176308   15224 logs.go:123] Gathering logs for kube-proxy [ea19428d7036] ...
	I1014 08:47:32.176308   15224 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea19428d7036"
	I1014 08:47:32.205310   15224 command_runner.go:130] ! I1014 15:22:47.466748       1 server_linux.go:66] "Using iptables proxy"
	I1014 08:47:32.205985   15224 command_runner.go:130] ! E1014 15:22:47.511018       1 proxier.go:734] "Error cleaning up nftables rules" err=<
	I1014 08:47:32.205985   15224 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-24: Error: Could not process rule: Operation not supported
	I1014 08:47:32.205985   15224 command_runner.go:130] ! 	add table ip kube-proxy
	I1014 08:47:32.205985   15224 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^
	I1014 08:47:32.206058   15224 command_runner.go:130] !  >
	I1014 08:47:32.206058   15224 command_runner.go:130] ! E1014 15:22:47.546236       1 proxier.go:734] "Error cleaning up nftables rules" err=<
	I1014 08:47:32.206058   15224 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
	I1014 08:47:32.206058   15224 command_runner.go:130] ! 	add table ip6 kube-proxy
	I1014 08:47:32.206058   15224 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^^
	I1014 08:47:32.206058   15224 command_runner.go:130] !  >
	I1014 08:47:32.206058   15224 command_runner.go:130] ! I1014 15:22:47.606437       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["172.20.100.167"]
	I1014 08:47:32.206058   15224 command_runner.go:130] ! E1014 15:22:47.606505       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1014 08:47:32.206058   15224 command_runner.go:130] ! I1014 15:22:47.788755       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1014 08:47:32.206058   15224 command_runner.go:130] ! I1014 15:22:47.788820       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1014 08:47:32.206058   15224 command_runner.go:130] ! I1014 15:22:47.788852       1 server_linux.go:169] "Using iptables Proxier"
	I1014 08:47:32.206058   15224 command_runner.go:130] ! I1014 15:22:47.807050       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1014 08:47:32.206058   15224 command_runner.go:130] ! I1014 15:22:47.807424       1 server.go:483] "Version info" version="v1.31.1"
	I1014 08:47:32.206058   15224 command_runner.go:130] ! I1014 15:22:47.807440       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 08:47:32.206058   15224 command_runner.go:130] ! I1014 15:22:47.810361       1 config.go:199] "Starting service config controller"
	I1014 08:47:32.206058   15224 command_runner.go:130] ! I1014 15:22:47.810416       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1014 08:47:32.206058   15224 command_runner.go:130] ! I1014 15:22:47.810600       1 config.go:105] "Starting endpoint slice config controller"
	I1014 08:47:32.206058   15224 command_runner.go:130] ! I1014 15:22:47.810629       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1014 08:47:32.206058   15224 command_runner.go:130] ! I1014 15:22:47.811297       1 config.go:328] "Starting node config controller"
	I1014 08:47:32.206058   15224 command_runner.go:130] ! I1014 15:22:47.811309       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1014 08:47:32.206058   15224 command_runner.go:130] ! I1014 15:22:47.911443       1 shared_informer.go:320] Caches are synced for node config
	I1014 08:47:32.206058   15224 command_runner.go:130] ! I1014 15:22:47.911791       1 shared_informer.go:320] Caches are synced for service config
	I1014 08:47:32.206058   15224 command_runner.go:130] ! I1014 15:22:47.911866       1 shared_informer.go:320] Caches are synced for endpoint slice config
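	Note: the two "Error cleaning up nftables rules ... Operation not supported" entries above are kube-proxy probing for stale nftables state before it settles on the iptables proxier; this guest kernel rejects the ip and ip6 nftables families, so the errors are cosmetic, and the later "Using iptables Proxier" and cache-sync lines show a normal startup. A hedged sketch that reproduces the same probe from Go follows; it assumes an nft binary on PATH inside the VM, feeds the command over stdin to match the /dev/stdin error positions in the log, and is illustrative rather than kube-proxy's actual code:

```go
// Hedged sketch: check whether the running kernel accepts nftables tables
// for the ip and ip6 families by piping the same rule kube-proxy logged
// ("add table <family> kube-proxy") into `nft -f -`. On a kernel without
// nftables support this fails exactly like the log above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for _, family := range []string{"ip", "ip6"} {
		cmd := exec.Command("nft", "-f", "-")
		cmd.Stdin = strings.NewReader("add table " + family + " kube-proxy\n")
		out, err := cmd.CombinedOutput()
		if err != nil {
			fmt.Printf("family %s: nftables unavailable: %v\n%s", family, err, out)
			continue
		}
		fmt.Printf("family %s: nftables available\n", family)
		// Remove the probe table so the check leaves no state behind.
		exec.Command("nft", "delete", "table", family, "kube-proxy").Run()
	}
}
```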
	I1014 08:47:32.209234   15224 logs.go:123] Gathering logs for kube-controller-manager [8af48c446f7e] ...
	I1014 08:47:32.209234   15224 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af48c446f7e"
	I1014 08:47:32.239806   15224 command_runner.go:130] ! I1014 15:46:12.989235       1 serving.go:386] Generated self-signed cert in-memory
	I1014 08:47:32.239806   15224 command_runner.go:130] ! I1014 15:46:13.820617       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I1014 08:47:32.239917   15224 command_runner.go:130] ! I1014 15:46:13.820897       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 08:47:32.239917   15224 command_runner.go:130] ! I1014 15:46:13.823101       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1014 08:47:32.239917   15224 command_runner.go:130] ! I1014 15:46:13.823494       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1014 08:47:32.239917   15224 command_runner.go:130] ! I1014 15:46:13.824132       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I1014 08:47:32.239917   15224 command_runner.go:130] ! I1014 15:46:13.824214       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1014 08:47:32.239917   15224 command_runner.go:130] ! I1014 15:46:17.208145       1 controllermanager.go:797] "Started controller" controller="serviceaccount-token-controller"
	I1014 08:47:32.239917   15224 command_runner.go:130] ! I1014 15:46:17.211496       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I1014 08:47:32.240106   15224 command_runner.go:130] ! I1014 15:46:17.268813       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I1014 08:47:32.240126   15224 command_runner.go:130] ! I1014 15:46:17.269727       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I1014 08:47:32.240215   15224 command_runner.go:130] ! I1014 15:46:17.270864       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I1014 08:47:32.240215   15224 command_runner.go:130] ! I1014 15:46:17.271094       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I1014 08:47:32.240215   15224 command_runner.go:130] ! I1014 15:46:17.271857       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I1014 08:47:32.240287   15224 command_runner.go:130] ! I1014 15:46:17.271962       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I1014 08:47:32.240344   15224 command_runner.go:130] ! I1014 15:46:17.272049       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I1014 08:47:32.240384   15224 command_runner.go:130] ! I1014 15:46:17.272075       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I1014 08:47:32.240384   15224 command_runner.go:130] ! I1014 15:46:17.273540       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I1014 08:47:32.240384   15224 command_runner.go:130] ! I1014 15:46:17.274245       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I1014 08:47:32.240454   15224 command_runner.go:130] ! I1014 15:46:17.274579       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I1014 08:47:32.240479   15224 command_runner.go:130] ! I1014 15:46:17.274747       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I1014 08:47:32.240508   15224 command_runner.go:130] ! I1014 15:46:17.274772       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I1014 08:47:32.240508   15224 command_runner.go:130] ! I1014 15:46:17.275348       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I1014 08:47:32.240508   15224 command_runner.go:130] ! I1014 15:46:17.275380       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I1014 08:47:32.240508   15224 command_runner.go:130] ! I1014 15:46:17.275397       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I1014 08:47:32.240508   15224 command_runner.go:130] ! I1014 15:46:17.275571       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I1014 08:47:32.240508   15224 command_runner.go:130] ! I1014 15:46:17.275603       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I1014 08:47:32.240508   15224 command_runner.go:130] ! W1014 15:46:17.275618       1 shared_informer.go:597] resyncPeriod 13h32m18.096579392s is smaller than resyncCheckPeriod 20h55m54.648340273s and the informer has already started. Changing it to 20h55m54.648340273s
	I1014 08:47:32.240508   15224 command_runner.go:130] ! I1014 15:46:17.276096       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I1014 08:47:32.240508   15224 command_runner.go:130] ! I1014 15:46:17.276150       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I1014 08:47:32.240508   15224 command_runner.go:130] ! I1014 15:46:17.276197       1 controllermanager.go:797] "Started controller" controller="resourcequota-controller"
	I1014 08:47:32.240508   15224 command_runner.go:130] ! I1014 15:46:17.276213       1 resource_quota_controller.go:300] "Starting resource quota controller" logger="resourcequota-controller"
	I1014 08:47:32.240508   15224 command_runner.go:130] ! I1014 15:46:17.276260       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I1014 08:47:32.240508   15224 command_runner.go:130] ! I1014 15:46:17.276359       1 resource_quota_monitor.go:308] "QuotaMonitor running" logger="resourcequota-controller"
	I1014 08:47:32.240508   15224 command_runner.go:130] ! I1014 15:46:17.283642       1 controllermanager.go:797] "Started controller" controller="replicaset-controller"
	I1014 08:47:32.240508   15224 command_runner.go:130] ! I1014 15:46:17.284697       1 replica_set.go:217] "Starting controller" logger="replicaset-controller" name="replicaset"
	I1014 08:47:32.240508   15224 command_runner.go:130] ! I1014 15:46:17.284913       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I1014 08:47:32.240508   15224 command_runner.go:130] ! I1014 15:46:17.288417       1 controllermanager.go:797] "Started controller" controller="cronjob-controller"
	I1014 08:47:32.240508   15224 command_runner.go:130] ! I1014 15:46:17.289073       1 cronjob_controllerv2.go:145] "Starting cronjob controller v2" logger="cronjob-controller"
	I1014 08:47:32.240508   15224 command_runner.go:130] ! I1014 15:46:17.289091       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I1014 08:47:32.240508   15224 command_runner.go:130] ! I1014 15:46:17.292212       1 controllermanager.go:797] "Started controller" controller="certificatesigningrequest-approving-controller"
	I1014 08:47:32.240508   15224 command_runner.go:130] ! I1014 15:46:17.292573       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I1014 08:47:32.241070   15224 command_runner.go:130] ! I1014 15:46:17.292591       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I1014 08:47:32.241070   15224 command_runner.go:130] ! I1014 15:46:17.295276       1 controllermanager.go:797] "Started controller" controller="clusterrole-aggregation-controller"
	I1014 08:47:32.241070   15224 command_runner.go:130] ! I1014 15:46:17.295785       1 controllermanager.go:749] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I1014 08:47:32.241070   15224 command_runner.go:130] ! I1014 15:46:17.298756       1 clusterroleaggregation_controller.go:194] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I1014 08:47:32.241070   15224 command_runner.go:130] ! I1014 15:46:17.299107       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I1014 08:47:32.241070   15224 command_runner.go:130] ! I1014 15:46:17.299997       1 controllermanager.go:797] "Started controller" controller="endpointslice-mirroring-controller"
	I1014 08:47:32.241070   15224 command_runner.go:130] ! I1014 15:46:17.302040       1 endpointslicemirroring_controller.go:227] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I1014 08:47:32.241070   15224 command_runner.go:130] ! I1014 15:46:17.302058       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I1014 08:47:32.241260   15224 command_runner.go:130] ! I1014 15:46:17.305668       1 controllermanager.go:797] "Started controller" controller="replicationcontroller-controller"
	I1014 08:47:32.241311   15224 command_runner.go:130] ! I1014 15:46:17.308801       1 replica_set.go:217] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I1014 08:47:32.241311   15224 command_runner.go:130] ! I1014 15:46:17.308819       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I1014 08:47:32.241311   15224 command_runner.go:130] ! I1014 15:46:17.318320       1 shared_informer.go:320] Caches are synced for tokens
	I1014 08:47:32.241415   15224 command_runner.go:130] ! I1014 15:46:17.329856       1 controllermanager.go:797] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I1014 08:47:32.241444   15224 command_runner.go:130] ! I1014 15:46:17.330990       1 horizontal.go:201] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I1014 08:47:32.241477   15224 command_runner.go:130] ! I1014 15:46:17.331395       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I1014 08:47:32.241477   15224 command_runner.go:130] ! I1014 15:46:17.345566       1 controllermanager.go:797] "Started controller" controller="disruption-controller"
	I1014 08:47:32.241562   15224 command_runner.go:130] ! I1014 15:46:17.345806       1 disruption.go:452] "Sending events to api server." logger="disruption-controller"
	I1014 08:47:32.241628   15224 command_runner.go:130] ! I1014 15:46:17.345841       1 disruption.go:463] "Starting disruption controller" logger="disruption-controller"
	I1014 08:47:32.241658   15224 command_runner.go:130] ! I1014 15:46:17.345937       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I1014 08:47:32.241658   15224 command_runner.go:130] ! E1014 15:46:17.350088       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I1014 08:47:32.241658   15224 command_runner.go:130] ! I1014 15:46:17.350237       1 controllermanager.go:775] "Warning: skipping controller" controller="service-lb-controller"
	I1014 08:47:32.241658   15224 command_runner.go:130] ! I1014 15:46:17.350277       1 controllermanager.go:775] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I1014 08:47:32.241658   15224 command_runner.go:130] ! I1014 15:46:17.359040       1 controllermanager.go:797] "Started controller" controller="endpoints-controller"
	I1014 08:47:32.241658   15224 command_runner.go:130] ! I1014 15:46:17.360243       1 endpoints_controller.go:182] "Starting endpoint controller" logger="endpoints-controller"
	I1014 08:47:32.241658   15224 command_runner.go:130] ! I1014 15:46:17.360265       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I1014 08:47:32.241658   15224 command_runner.go:130] ! I1014 15:46:17.362115       1 controllermanager.go:797] "Started controller" controller="pod-garbage-collector-controller"
	I1014 08:47:32.241658   15224 command_runner.go:130] ! I1014 15:46:17.362235       1 gc_controller.go:99] "Starting GC controller" logger="pod-garbage-collector-controller"
	I1014 08:47:32.241658   15224 command_runner.go:130] ! I1014 15:46:17.362245       1 shared_informer.go:313] Waiting for caches to sync for GC
	I1014 08:47:32.241658   15224 command_runner.go:130] ! I1014 15:46:17.364537       1 controllermanager.go:797] "Started controller" controller="serviceaccount-controller"
	I1014 08:47:32.241658   15224 command_runner.go:130] ! I1014 15:46:17.364725       1 serviceaccounts_controller.go:114] "Starting service account controller" logger="serviceaccount-controller"
	I1014 08:47:32.241658   15224 command_runner.go:130] ! I1014 15:46:17.364738       1 shared_informer.go:313] Waiting for caches to sync for service account
	I1014 08:47:32.241658   15224 command_runner.go:130] ! I1014 15:46:17.367152       1 controllermanager.go:797] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I1014 08:47:32.241658   15224 command_runner.go:130] ! I1014 15:46:17.367373       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I1014 08:47:32.241658   15224 command_runner.go:130] ! I1014 15:46:17.369619       1 controllermanager.go:797] "Started controller" controller="bootstrap-signer-controller"
	I1014 08:47:32.241658   15224 command_runner.go:130] ! I1014 15:46:17.370097       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I1014 08:47:32.241658   15224 command_runner.go:130] ! I1014 15:46:17.373109       1 controllermanager.go:797] "Started controller" controller="token-cleaner-controller"
	I1014 08:47:32.241658   15224 command_runner.go:130] ! I1014 15:46:17.373475       1 tokencleaner.go:117] "Starting token cleaner controller" logger="token-cleaner-controller"
	I1014 08:47:32.242468   15224 command_runner.go:130] ! I1014 15:46:17.373486       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I1014 08:47:32.242468   15224 command_runner.go:130] ! I1014 15:46:17.373493       1 shared_informer.go:320] Caches are synced for token_cleaner
	I1014 08:47:32.242468   15224 command_runner.go:130] ! I1014 15:46:17.375506       1 controllermanager.go:797] "Started controller" controller="ttl-after-finished-controller"
	I1014 08:47:32.242468   15224 command_runner.go:130] ! I1014 15:46:17.375684       1 ttlafterfinished_controller.go:112] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I1014 08:47:32.242468   15224 command_runner.go:130] ! I1014 15:46:17.375694       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I1014 08:47:32.242468   15224 command_runner.go:130] ! I1014 15:46:17.379552       1 controllermanager.go:797] "Started controller" controller="ephemeral-volume-controller"
	I1014 08:47:32.242468   15224 command_runner.go:130] ! I1014 15:46:17.380063       1 controller.go:173] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I1014 08:47:32.242468   15224 command_runner.go:130] ! I1014 15:46:17.380270       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I1014 08:47:32.242468   15224 command_runner.go:130] ! I1014 15:46:17.413079       1 controllermanager.go:797] "Started controller" controller="namespace-controller"
	I1014 08:47:32.242468   15224 command_runner.go:130] ! I1014 15:46:17.413676       1 namespace_controller.go:202] "Starting namespace controller" logger="namespace-controller"
	I1014 08:47:32.242468   15224 command_runner.go:130] ! I1014 15:46:17.415689       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I1014 08:47:32.242468   15224 command_runner.go:130] ! I1014 15:46:17.418729       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I1014 08:47:32.242468   15224 command_runner.go:130] ! I1014 15:46:17.418858       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I1014 08:47:32.242468   15224 command_runner.go:130] ! I1014 15:46:17.418983       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I1014 08:47:32.242468   15224 command_runner.go:130] ! I1014 15:46:17.420448       1 controllermanager.go:797] "Started controller" controller="certificatesigningrequest-signing-controller"
	I1014 08:47:32.242468   15224 command_runner.go:130] ! I1014 15:46:17.420573       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I1014 08:47:32.242468   15224 command_runner.go:130] ! I1014 15:46:17.420658       1 controllermanager.go:775] "Warning: skipping controller" controller="node-route-controller"
	I1014 08:47:32.242468   15224 command_runner.go:130] ! I1014 15:46:17.420878       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I1014 08:47:32.242468   15224 command_runner.go:130] ! I1014 15:46:17.422022       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I1014 08:47:32.242468   15224 command_runner.go:130] ! I1014 15:46:17.422169       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I1014 08:47:32.242468   15224 command_runner.go:130] ! I1014 15:46:17.422636       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I1014 08:47:32.242468   15224 command_runner.go:130] ! I1014 15:46:17.425521       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I1014 08:47:32.242468   15224 command_runner.go:130] ! I1014 15:46:17.425557       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I1014 08:47:32.242468   15224 command_runner.go:130] ! I1014 15:46:17.425747       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I1014 08:47:32.242468   15224 command_runner.go:130] ! I1014 15:46:17.425569       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I1014 08:47:32.243001   15224 command_runner.go:130] ! I1014 15:46:17.425577       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I1014 08:47:32.243001   15224 command_runner.go:130] ! E1014 15:46:17.429609       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I1014 08:47:32.243001   15224 command_runner.go:130] ! I1014 15:46:17.429771       1 controllermanager.go:775] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I1014 08:47:32.243001   15224 command_runner.go:130] ! I1014 15:46:17.432720       1 controllermanager.go:797] "Started controller" controller="persistentvolume-attach-detach-controller"
	I1014 08:47:32.243001   15224 command_runner.go:130] ! I1014 15:46:17.433242       1 attach_detach_controller.go:338] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I1014 08:47:32.243134   15224 command_runner.go:130] ! I1014 15:46:17.433509       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I1014 08:47:32.243134   15224 command_runner.go:130] ! I1014 15:46:17.437867       1 controllermanager.go:797] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I1014 08:47:32.243217   15224 command_runner.go:130] ! I1014 15:46:17.438432       1 pvc_protection_controller.go:105] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I1014 08:47:32.243217   15224 command_runner.go:130] ! I1014 15:46:17.438754       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I1014 08:47:32.243308   15224 command_runner.go:130] ! I1014 15:46:17.466996       1 controllermanager.go:797] "Started controller" controller="garbage-collector-controller"
	I1014 08:47:32.243308   15224 command_runner.go:130] ! I1014 15:46:17.467178       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I1014 08:47:32.243308   15224 command_runner.go:130] ! I1014 15:46:17.467191       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I1014 08:47:32.243308   15224 command_runner.go:130] ! I1014 15:46:17.467211       1 graph_builder.go:351] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I1014 08:47:32.243377   15224 command_runner.go:130] ! I1014 15:46:17.513974       1 controllermanager.go:797] "Started controller" controller="daemonset-controller"
	I1014 08:47:32.243377   15224 command_runner.go:130] ! I1014 15:46:17.514092       1 daemon_controller.go:294] "Starting daemon sets controller" logger="daemonset-controller"
	I1014 08:47:32.243405   15224 command_runner.go:130] ! I1014 15:46:17.514103       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I1014 08:47:32.243453   15224 command_runner.go:130] ! I1014 15:46:17.612272       1 controllermanager.go:797] "Started controller" controller="deployment-controller"
	I1014 08:47:32.243471   15224 command_runner.go:130] ! I1014 15:46:17.612390       1 deployment_controller.go:173] "Starting controller" logger="deployment-controller" controller="deployment"
	I1014 08:47:32.243499   15224 command_runner.go:130] ! I1014 15:46:17.612405       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I1014 08:47:32.243639   15224 command_runner.go:130] ! I1014 15:46:17.715625       1 controllermanager.go:797] "Started controller" controller="ttl-controller"
	I1014 08:47:32.243672   15224 command_runner.go:130] ! I1014 15:46:17.718491       1 ttl_controller.go:127] "Starting TTL controller" logger="ttl-controller"
	I1014 08:47:32.243672   15224 command_runner.go:130] ! I1014 15:46:17.718512       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I1014 08:47:32.243672   15224 command_runner.go:130] ! I1014 15:46:17.762259       1 node_lifecycle_controller.go:430] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I1014 08:47:32.243710   15224 command_runner.go:130] ! I1014 15:46:17.762792       1 controllermanager.go:797] "Started controller" controller="node-lifecycle-controller"
	I1014 08:47:32.243710   15224 command_runner.go:130] ! I1014 15:46:17.763108       1 node_lifecycle_controller.go:464] "Sending events to api server" logger="node-lifecycle-controller"
	I1014 08:47:32.243710   15224 command_runner.go:130] ! I1014 15:46:17.763488       1 node_lifecycle_controller.go:475] "Starting node controller" logger="node-lifecycle-controller"
	I1014 08:47:32.243710   15224 command_runner.go:130] ! I1014 15:46:17.763636       1 shared_informer.go:313] Waiting for caches to sync for taint
	I1014 08:47:32.243710   15224 command_runner.go:130] ! I1014 15:46:17.815269       1 controllermanager.go:797] "Started controller" controller="persistentvolume-expander-controller"
	I1014 08:47:32.243796   15224 command_runner.go:130] ! I1014 15:46:17.815926       1 controllermanager.go:749] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I1014 08:47:32.243796   15224 command_runner.go:130] ! I1014 15:46:17.815820       1 expand_controller.go:328] "Starting expand controller" logger="persistentvolume-expander-controller"
	I1014 08:47:32.243864   15224 command_runner.go:130] ! I1014 15:46:17.815981       1 shared_informer.go:313] Waiting for caches to sync for expand
	I1014 08:47:32.243864   15224 command_runner.go:130] ! I1014 15:46:17.865803       1 controllermanager.go:797] "Started controller" controller="taint-eviction-controller"
	I1014 08:47:32.243892   15224 command_runner.go:130] ! I1014 15:46:17.865833       1 controllermanager.go:749] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I1014 08:47:32.243940   15224 command_runner.go:130] ! I1014 15:46:17.865908       1 taint_eviction.go:281] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I1014 08:47:32.244008   15224 command_runner.go:130] ! I1014 15:46:17.865945       1 taint_eviction.go:287] "Sending events to api server" logger="taint-eviction-controller"
	I1014 08:47:32.244008   15224 command_runner.go:130] ! I1014 15:46:17.865986       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I1014 08:47:32.244008   15224 command_runner.go:130] ! I1014 15:46:17.923932       1 controllermanager.go:797] "Started controller" controller="job-controller"
	I1014 08:47:32.244059   15224 command_runner.go:130] ! I1014 15:46:17.924153       1 job_controller.go:226] "Starting job controller" logger="job-controller"
	I1014 08:47:32.244059   15224 command_runner.go:130] ! I1014 15:46:17.924184       1 shared_informer.go:313] Waiting for caches to sync for job
	I1014 08:47:32.244120   15224 command_runner.go:130] ! I1014 15:46:17.978728       1 controllermanager.go:797] "Started controller" controller="root-ca-certificate-publisher-controller"
	I1014 08:47:32.244120   15224 command_runner.go:130] ! I1014 15:46:17.978796       1 publisher.go:107] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I1014 08:47:32.244145   15224 command_runner.go:130] ! I1014 15:46:17.978809       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I1014 08:47:32.244145   15224 command_runner.go:130] ! I1014 15:46:18.018003       1 controllermanager.go:797] "Started controller" controller="endpointslice-controller"
	I1014 08:47:32.244145   15224 command_runner.go:130] ! I1014 15:46:18.018177       1 endpointslice_controller.go:281] "Starting endpoint slice controller" logger="endpointslice-controller"
	I1014 08:47:32.244145   15224 command_runner.go:130] ! I1014 15:46:18.018192       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I1014 08:47:32.244203   15224 command_runner.go:130] ! I1014 15:46:18.077409       1 controllermanager.go:797] "Started controller" controller="statefulset-controller"
	I1014 08:47:32.244203   15224 command_runner.go:130] ! I1014 15:46:18.078007       1 stateful_set.go:166] "Starting stateful set controller" logger="statefulset-controller"
	I1014 08:47:32.244203   15224 command_runner.go:130] ! I1014 15:46:18.078026       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I1014 08:47:32.244203   15224 command_runner.go:130] ! I1014 15:46:18.245465       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I1014 08:47:32.244282   15224 command_runner.go:130] ! I1014 15:46:18.246368       1 controllermanager.go:797] "Started controller" controller="node-ipam-controller"
	I1014 08:47:32.244312   15224 command_runner.go:130] ! I1014 15:46:18.246712       1 node_ipam_controller.go:141] "Starting ipam controller" logger="node-ipam-controller"
	I1014 08:47:32.244312   15224 command_runner.go:130] ! I1014 15:46:18.246910       1 shared_informer.go:313] Waiting for caches to sync for node
	I1014 08:47:32.244312   15224 command_runner.go:130] ! I1014 15:46:18.264869       1 controllermanager.go:797] "Started controller" controller="persistentvolume-binder-controller"
	I1014 08:47:32.244312   15224 command_runner.go:130] ! I1014 15:46:18.264984       1 pv_controller_base.go:308] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I1014 08:47:32.244312   15224 command_runner.go:130] ! I1014 15:46:18.266232       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I1014 08:47:32.244312   15224 command_runner.go:130] ! I1014 15:46:18.321121       1 controllermanager.go:797] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I1014 08:47:32.244312   15224 command_runner.go:130] ! I1014 15:46:18.323482       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I1014 08:47:32.244312   15224 command_runner.go:130] ! I1014 15:46:18.323903       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I1014 08:47:32.244312   15224 command_runner.go:130] ! I1014 15:46:18.431796       1 controllermanager.go:797] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I1014 08:47:32.244312   15224 command_runner.go:130] ! I1014 15:46:18.431873       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I1014 08:47:32.244312   15224 command_runner.go:130] ! I1014 15:46:18.465851       1 controllermanager.go:797] "Started controller" controller="persistentvolume-protection-controller"
	I1014 08:47:32.244312   15224 command_runner.go:130] ! I1014 15:46:18.468767       1 pv_protection_controller.go:81] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I1014 08:47:32.244312   15224 command_runner.go:130] ! I1014 15:46:18.469028       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I1014 08:47:32.244312   15224 command_runner.go:130] ! I1014 15:46:18.485571       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I1014 08:47:32.244312   15224 command_runner.go:130] ! I1014 15:46:18.534720       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I1014 08:47:32.244312   15224 command_runner.go:130] ! I1014 15:46:18.539015       1 shared_informer.go:320] Caches are synced for PVC protection
	I1014 08:47:32.244312   15224 command_runner.go:130] ! I1014 15:46:18.541399       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-671000\" does not exist"
	I1014 08:47:32.244312   15224 command_runner.go:130] ! I1014 15:46:18.541615       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-671000-m02\" does not exist"
	I1014 08:47:32.244312   15224 command_runner.go:130] ! I1014 15:46:18.549102       1 shared_informer.go:320] Caches are synced for TTL
	I1014 08:47:32.244312   15224 command_runner.go:130] ! I1014 15:46:18.549549       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-671000-m03\" does not exist"
	I1014 08:47:32.244312   15224 command_runner.go:130] ! I1014 15:46:18.550590       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I1014 08:47:32.244312   15224 command_runner.go:130] ! I1014 15:46:18.551387       1 shared_informer.go:320] Caches are synced for node
	I1014 08:47:32.244312   15224 command_runner.go:130] ! I1014 15:46:18.554673       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I1014 08:47:32.244312   15224 command_runner.go:130] ! I1014 15:46:18.557592       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1014 08:47:32.244312   15224 command_runner.go:130] ! I1014 15:46:18.558471       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I1014 08:47:32.244312   15224 command_runner.go:130] ! I1014 15:46:18.558669       1 shared_informer.go:320] Caches are synced for cidrallocator
	I1014 08:47:32.244312   15224 command_runner.go:130] ! I1014 15:46:18.559066       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000"
	I1014 08:47:32.244312   15224 command_runner.go:130] ! I1014 15:46:18.559166       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:32.244312   15224 command_runner.go:130] ! I1014 15:46:18.559144       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:32.244312   15224 command_runner.go:130] ! I1014 15:46:18.560823       1 shared_informer.go:320] Caches are synced for endpoint
	I1014 08:47:32.244312   15224 command_runner.go:130] ! I1014 15:46:18.563147       1 shared_informer.go:320] Caches are synced for GC
	I1014 08:47:32.244312   15224 command_runner.go:130] ! I1014 15:46:18.566072       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I1014 08:47:32.244839   15224 command_runner.go:130] ! I1014 15:46:18.566447       1 shared_informer.go:320] Caches are synced for persistent volume
	I1014 08:47:32.244839   15224 command_runner.go:130] ! I1014 15:46:18.566267       1 shared_informer.go:320] Caches are synced for service account
	I1014 08:47:32.244839   15224 command_runner.go:130] ! I1014 15:46:18.570369       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I1014 08:47:32.244839   15224 command_runner.go:130] ! I1014 15:46:18.570522       1 shared_informer.go:320] Caches are synced for PV protection
	I1014 08:47:32.244839   15224 command_runner.go:130] ! I1014 15:46:18.577368       1 shared_informer.go:320] Caches are synced for TTL after finished
	I1014 08:47:32.244839   15224 command_runner.go:130] ! I1014 15:46:18.580187       1 shared_informer.go:320] Caches are synced for crt configmap
	I1014 08:47:32.244839   15224 command_runner.go:130] ! I1014 15:46:18.580534       1 shared_informer.go:320] Caches are synced for ephemeral
	I1014 08:47:32.244839   15224 command_runner.go:130] ! I1014 15:46:18.585372       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I1014 08:47:32.244991   15224 command_runner.go:130] ! I1014 15:46:18.593972       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I1014 08:47:32.244991   15224 command_runner.go:130] ! I1014 15:46:18.595014       1 shared_informer.go:320] Caches are synced for cronjob
	I1014 08:47:32.244991   15224 command_runner.go:130] ! I1014 15:46:18.600012       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I1014 08:47:32.244991   15224 command_runner.go:130] ! I1014 15:46:18.602930       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I1014 08:47:32.245069   15224 command_runner.go:130] ! I1014 15:46:18.609680       1 shared_informer.go:320] Caches are synced for ReplicationController
	I1014 08:47:32.245110   15224 command_runner.go:130] ! I1014 15:46:18.613447       1 shared_informer.go:320] Caches are synced for deployment
	I1014 08:47:32.245110   15224 command_runner.go:130] ! I1014 15:46:18.616246       1 shared_informer.go:320] Caches are synced for expand
	I1014 08:47:32.245110   15224 command_runner.go:130] ! I1014 15:46:18.616739       1 shared_informer.go:320] Caches are synced for namespace
	I1014 08:47:32.245110   15224 command_runner.go:130] ! I1014 15:46:18.618534       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I1014 08:47:32.245177   15224 command_runner.go:130] ! I1014 15:46:18.625249       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I1014 08:47:32.245204   15224 command_runner.go:130] ! I1014 15:46:18.630423       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I1014 08:47:32.245229   15224 command_runner.go:130] ! I1014 15:46:18.632938       1 shared_informer.go:320] Caches are synced for HPA
	I1014 08:47:32.245229   15224 command_runner.go:130] ! I1014 15:46:18.633193       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I1014 08:47:32.245229   15224 command_runner.go:130] ! I1014 15:46:18.634381       1 shared_informer.go:320] Caches are synced for job
	I1014 08:47:32.245229   15224 command_runner.go:130] ! I1014 15:46:18.634623       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1014 08:47:32.245229   15224 command_runner.go:130] ! I1014 15:46:18.634920       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I1014 08:47:32.245229   15224 command_runner.go:130] ! I1014 15:46:18.649619       1 shared_informer.go:320] Caches are synced for disruption
	I1014 08:47:32.245229   15224 command_runner.go:130] ! I1014 15:46:18.668155       1 shared_informer.go:320] Caches are synced for taint
	I1014 08:47:32.245229   15224 command_runner.go:130] ! I1014 15:46:18.670026       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1014 08:47:32.245229   15224 command_runner.go:130] ! I1014 15:46:18.680357       1 shared_informer.go:320] Caches are synced for stateful set
	I1014 08:47:32.245229   15224 command_runner.go:130] ! I1014 15:46:18.700582       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000"
	I1014 08:47:32.245229   15224 command_runner.go:130] ! I1014 15:46:18.708812       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:32.245229   15224 command_runner.go:130] ! I1014 15:46:18.714134       1 shared_informer.go:320] Caches are synced for daemon sets
	I1014 08:47:32.245229   15224 command_runner.go:130] ! I1014 15:46:18.718536       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-671000"
	I1014 08:47:32.245229   15224 command_runner.go:130] ! I1014 15:46:18.718841       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-671000-m02"
	I1014 08:47:32.245229   15224 command_runner.go:130] ! I1014 15:46:18.719036       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-671000-m03"
	I1014 08:47:32.245229   15224 command_runner.go:130] ! I1014 15:46:18.721210       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="135.448763ms"
	I1014 08:47:32.245229   15224 command_runner.go:130] ! I1014 15:46:18.721514       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="40.1µs"
	I1014 08:47:32.245229   15224 command_runner.go:130] ! I1014 15:46:18.721809       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="136.173363ms"
	I1014 08:47:32.245229   15224 command_runner.go:130] ! I1014 15:46:18.722033       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="29.5µs"
	I1014 08:47:32.245229   15224 command_runner.go:130] ! I1014 15:46:18.722234       1 node_lifecycle_controller.go:1036] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1014 08:47:32.245229   15224 command_runner.go:130] ! I1014 15:46:18.777385       1 shared_informer.go:320] Caches are synced for resource quota
	I1014 08:47:32.245229   15224 command_runner.go:130] ! I1014 15:46:18.786812       1 shared_informer.go:320] Caches are synced for resource quota
	I1014 08:47:32.245229   15224 command_runner.go:130] ! I1014 15:46:18.833914       1 shared_informer.go:320] Caches are synced for attach detach
	I1014 08:47:32.245229   15224 command_runner.go:130] ! I1014 15:46:19.252391       1 shared_informer.go:320] Caches are synced for garbage collector
	I1014 08:47:32.245229   15224 command_runner.go:130] ! I1014 15:46:19.267855       1 shared_informer.go:320] Caches are synced for garbage collector
	I1014 08:47:32.245229   15224 command_runner.go:130] ! I1014 15:46:19.268119       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I1014 08:47:32.245754   15224 command_runner.go:130] ! I1014 15:46:59.871635       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000"
	I1014 08:47:32.245754   15224 command_runner.go:130] ! I1014 15:46:59.892163       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000"
	I1014 08:47:32.245754   15224 command_runner.go:130] ! I1014 15:47:03.736416       1 node_lifecycle_controller.go:1055] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I1014 08:47:32.245754   15224 command_runner.go:130] ! I1014 15:47:13.821153       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:32.245754   15224 command_runner.go:130] ! I1014 15:47:20.979721       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="114.5µs"
	I1014 08:47:32.245754   15224 command_runner.go:130] ! I1014 15:47:22.061324       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="10.05527ms"
	I1014 08:47:32.245754   15224 command_runner.go:130] ! I1014 15:47:22.062652       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="53.8µs"
	I1014 08:47:32.245883   15224 command_runner.go:130] ! I1014 15:47:22.098955       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="28.422114ms"
	I1014 08:47:32.245883   15224 command_runner.go:130] ! I1014 15:47:22.099794       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="313.699µs"
	I1014 08:47:32.245936   15224 command_runner.go:130] ! I1014 15:47:23.920002       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
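
	The node-lifecycle entries above (entering master disruption mode at 15:46:18, exiting it at 15:47:03) track the per-node Ready condition. A quick way to inspect the same condition by hand, assuming kubectl is pointed at this cluster's kubeconfig (node name taken from the log above), is:

	    kubectl get nodes -o wide
	    # Ready condition for a single node:
	    kubectl get nodes multinode-671000-m02 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
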
	I1014 08:47:32.261536   15224 logs.go:123] Gathering logs for container status ...
	I1014 08:47:32.261536   15224 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 08:47:32.331202   15224 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I1014 08:47:32.331383   15224 command_runner.go:130] > 1adddc667bd90       8c811b4aec35f                                                                                         12 seconds ago       Running             busybox                   1                   9092d17516eb3       busybox-7dff88458-vlp7j
	I1014 08:47:32.331383   15224 command_runner.go:130] > 5d223e2e64fcd       c69fa2e9cbf5f                                                                                         12 seconds ago       Running             coredns                   1                   429b989a1a986       coredns-7c65d6cfc9-fs9ct
	I1014 08:47:32.331383   15224 command_runner.go:130] > 9d526b02ee41c       6e38f40d628db                                                                                         30 seconds ago       Running             storage-provisioner       2                   cdcdd532ba136       storage-provisioner
	I1014 08:47:32.331481   15224 command_runner.go:130] > bba035362eb97       3a5bc24055c9e                                                                                         About a minute ago   Running             kindnet-cni               1                   7bcadf1f0885f       kindnet-wqrx6
	I1014 08:47:32.331481   15224 command_runner.go:130] > c76c258568107       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   cdcdd532ba136       storage-provisioner
	I1014 08:47:32.331481   15224 command_runner.go:130] > e83db276dec37       60c005f310ff3                                                                                         About a minute ago   Running             kube-proxy                1                   6f8bdf552734e       kube-proxy-r74dx
	I1014 08:47:32.331570   15224 command_runner.go:130] > 48c8492e231e1       2e96e5913fc06                                                                                         About a minute ago   Running             etcd                      0                   0697a11790e80       etcd-multinode-671000
	I1014 08:47:32.331685   15224 command_runner.go:130] > 8af48c446f7e1       175ffd71cce3d                                                                                         About a minute ago   Running             kube-controller-manager   1                   7bd4c36606eef       kube-controller-manager-multinode-671000
	I1014 08:47:32.331792   15224 command_runner.go:130] > a834664fc8b80       6bab7719df100                                                                                         About a minute ago   Running             kube-apiserver            0                   6155e8be2d5d7       kube-apiserver-multinode-671000
	I1014 08:47:32.331848   15224 command_runner.go:130] > d428685276e1e       9aa1fad941575                                                                                         About a minute ago   Running             kube-scheduler            1                   1d3033f871fb1       kube-scheduler-multinode-671000
	I1014 08:47:32.331908   15224 command_runner.go:130] > cbf0b40e378b9       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   20 minutes ago       Exited              busybox                   0                   06e529266db4b       busybox-7dff88458-vlp7j
	I1014 08:47:32.331908   15224 command_runner.go:130] > d9831e9f8ce8c       c69fa2e9cbf5f                                                                                         24 minutes ago       Exited              coredns                   0                   2f8cc9a218fef       coredns-7c65d6cfc9-fs9ct
	I1014 08:47:32.331973   15224 command_runner.go:130] > fcdf89a3ac8ce       kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387              24 minutes ago       Exited              kindnet-cni               0                   5e48ddcfdf90a       kindnet-wqrx6
	I1014 08:47:32.331999   15224 command_runner.go:130] > ea19428d70363       60c005f310ff3                                                                                         24 minutes ago       Exited              kube-proxy                0                   7144d8ce208cf       kube-proxy-r74dx
	I1014 08:47:32.331999   15224 command_runner.go:130] > 661e75bbf6b46       9aa1fad941575                                                                                         24 minutes ago       Exited              kube-scheduler            0                   2dc78387553ff       kube-scheduler-multinode-671000
	I1014 08:47:32.332082   15224 command_runner.go:130] > 712aad669c9f6       175ffd71cce3d                                                                                         24 minutes ago       Exited              kube-controller-manager   0                   bfdde08319e32       kube-controller-manager-multinode-671000
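
	The container-status step above already encodes its own fallback: crictl if present on the node, plain docker otherwise. Run against this profile by hand it is, roughly (profile name from the log; assumes the VM is still up):

	    minikube -p multinode-671000 ssh -- \
	      'sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a'
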
	I1014 08:47:32.334192   15224 logs.go:123] Gathering logs for kube-apiserver [a834664fc8b8] ...
	I1014 08:47:32.334798   15224 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a834664fc8b8"
	I1014 08:47:32.362254   15224 command_runner.go:130] ! I1014 15:46:12.133612       1 options.go:228] external host was not specified, using 172.20.106.123
	I1014 08:47:32.362254   15224 command_runner.go:130] ! I1014 15:46:12.139596       1 server.go:142] Version: v1.31.1
	I1014 08:47:32.362254   15224 command_runner.go:130] ! I1014 15:46:12.140322       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 08:47:32.362254   15224 command_runner.go:130] ! I1014 15:46:13.070213       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I1014 08:47:32.362254   15224 command_runner.go:130] ! I1014 15:46:13.112422       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1014 08:47:32.362254   15224 command_runner.go:130] ! I1014 15:46:13.116622       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I1014 08:47:32.362254   15224 command_runner.go:130] ! I1014 15:46:13.116890       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1014 08:47:32.362254   15224 command_runner.go:130] ! I1014 15:46:13.117611       1 instance.go:232] Using reconciler: lease
	I1014 08:47:32.362254   15224 command_runner.go:130] ! I1014 15:46:13.606403       1 handler.go:286] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I1014 08:47:32.362254   15224 command_runner.go:130] ! W1014 15:46:13.606961       1 genericapiserver.go:765] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:32.362254   15224 command_runner.go:130] ! I1014 15:46:13.910757       1 handler.go:286] Adding GroupVersion  v1 to ResourceManager
	I1014 08:47:32.362254   15224 command_runner.go:130] ! I1014 15:46:13.911096       1 apis.go:105] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I1014 08:47:32.362254   15224 command_runner.go:130] ! I1014 15:46:14.140196       1 apis.go:105] API group "storagemigration.k8s.io" is not enabled, skipping.
	I1014 08:47:32.362254   15224 command_runner.go:130] ! I1014 15:46:14.332586       1 apis.go:105] API group "resource.k8s.io" is not enabled, skipping.
	I1014 08:47:32.362254   15224 command_runner.go:130] ! I1014 15:46:14.344695       1 handler.go:286] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I1014 08:47:32.362254   15224 command_runner.go:130] ! W1014 15:46:14.344792       1 genericapiserver.go:765] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:32.362254   15224 command_runner.go:130] ! W1014 15:46:14.344802       1 genericapiserver.go:765] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I1014 08:47:32.362254   15224 command_runner.go:130] ! I1014 15:46:14.345547       1 handler.go:286] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I1014 08:47:32.362254   15224 command_runner.go:130] ! W1014 15:46:14.345645       1 genericapiserver.go:765] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:32.362254   15224 command_runner.go:130] ! I1014 15:46:14.346729       1 handler.go:286] Adding GroupVersion autoscaling v2 to ResourceManager
	I1014 08:47:32.362254   15224 command_runner.go:130] ! I1014 15:46:14.348142       1 handler.go:286] Adding GroupVersion autoscaling v1 to ResourceManager
	I1014 08:47:32.362254   15224 command_runner.go:130] ! W1014 15:46:14.348261       1 genericapiserver.go:765] Skipping API autoscaling/v2beta1 because it has no resources.
	I1014 08:47:32.362254   15224 command_runner.go:130] ! W1014 15:46:14.348272       1 genericapiserver.go:765] Skipping API autoscaling/v2beta2 because it has no resources.
	I1014 08:47:32.362254   15224 command_runner.go:130] ! I1014 15:46:14.350632       1 handler.go:286] Adding GroupVersion batch v1 to ResourceManager
	I1014 08:47:32.362254   15224 command_runner.go:130] ! W1014 15:46:14.350741       1 genericapiserver.go:765] Skipping API batch/v1beta1 because it has no resources.
	I1014 08:47:32.362254   15224 command_runner.go:130] ! I1014 15:46:14.352378       1 handler.go:286] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I1014 08:47:32.362254   15224 command_runner.go:130] ! W1014 15:46:14.352489       1 genericapiserver.go:765] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:32.362254   15224 command_runner.go:130] ! W1014 15:46:14.352501       1 genericapiserver.go:765] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I1014 08:47:32.362254   15224 command_runner.go:130] ! I1014 15:46:14.353674       1 handler.go:286] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I1014 08:47:32.362254   15224 command_runner.go:130] ! W1014 15:46:14.353813       1 genericapiserver.go:765] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:32.362254   15224 command_runner.go:130] ! W1014 15:46:14.353843       1 genericapiserver.go:765] Skipping API coordination.k8s.io/v1alpha1 because it has no resources.
	I1014 08:47:32.362254   15224 command_runner.go:130] ! I1014 15:46:14.355117       1 handler.go:286] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I1014 08:47:32.362254   15224 command_runner.go:130] ! W1014 15:46:14.355256       1 genericapiserver.go:765] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:32.362254   15224 command_runner.go:130] ! I1014 15:46:14.358401       1 handler.go:286] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I1014 08:47:32.363266   15224 command_runner.go:130] ! W1014 15:46:14.358517       1 genericapiserver.go:765] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:32.363266   15224 command_runner.go:130] ! W1014 15:46:14.358528       1 genericapiserver.go:765] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I1014 08:47:32.363266   15224 command_runner.go:130] ! I1014 15:46:14.359534       1 handler.go:286] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I1014 08:47:32.363266   15224 command_runner.go:130] ! W1014 15:46:14.359632       1 genericapiserver.go:765] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:32.363266   15224 command_runner.go:130] ! W1014 15:46:14.359643       1 genericapiserver.go:765] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I1014 08:47:32.363266   15224 command_runner.go:130] ! I1014 15:46:14.360836       1 handler.go:286] Adding GroupVersion policy v1 to ResourceManager
	I1014 08:47:32.363266   15224 command_runner.go:130] ! W1014 15:46:14.360942       1 genericapiserver.go:765] Skipping API policy/v1beta1 because it has no resources.
	I1014 08:47:32.363266   15224 command_runner.go:130] ! I1014 15:46:14.363702       1 handler.go:286] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I1014 08:47:32.363266   15224 command_runner.go:130] ! W1014 15:46:14.363848       1 genericapiserver.go:765] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:32.363266   15224 command_runner.go:130] ! W1014 15:46:14.363860       1 genericapiserver.go:765] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I1014 08:47:32.363266   15224 command_runner.go:130] ! I1014 15:46:14.364685       1 handler.go:286] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I1014 08:47:32.363266   15224 command_runner.go:130] ! W1014 15:46:14.364801       1 genericapiserver.go:765] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:32.363266   15224 command_runner.go:130] ! W1014 15:46:14.364812       1 genericapiserver.go:765] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I1014 08:47:32.363266   15224 command_runner.go:130] ! I1014 15:46:14.368101       1 handler.go:286] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I1014 08:47:32.363266   15224 command_runner.go:130] ! W1014 15:46:14.368216       1 genericapiserver.go:765] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:32.363266   15224 command_runner.go:130] ! W1014 15:46:14.368228       1 genericapiserver.go:765] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I1014 08:47:32.363266   15224 command_runner.go:130] ! I1014 15:46:14.370008       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1 to ResourceManager
	I1014 08:47:32.363266   15224 command_runner.go:130] ! I1014 15:46:14.371702       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
	I1014 08:47:32.363266   15224 command_runner.go:130] ! W1014 15:46:14.371808       1 genericapiserver.go:765] Skipping API flowcontrol.apiserver.k8s.io/v1beta2 because it has no resources.
	I1014 08:47:32.363266   15224 command_runner.go:130] ! W1014 15:46:14.371818       1 genericapiserver.go:765] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:32.363266   15224 command_runner.go:130] ! I1014 15:46:14.376771       1 handler.go:286] Adding GroupVersion apps v1 to ResourceManager
	I1014 08:47:32.363266   15224 command_runner.go:130] ! W1014 15:46:14.376868       1 genericapiserver.go:765] Skipping API apps/v1beta2 because it has no resources.
	I1014 08:47:32.363266   15224 command_runner.go:130] ! W1014 15:46:14.376877       1 genericapiserver.go:765] Skipping API apps/v1beta1 because it has no resources.
	I1014 08:47:32.363266   15224 command_runner.go:130] ! I1014 15:46:14.379998       1 handler.go:286] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I1014 08:47:32.363266   15224 command_runner.go:130] ! W1014 15:46:14.380101       1 genericapiserver.go:765] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:32.363266   15224 command_runner.go:130] ! W1014 15:46:14.380112       1 genericapiserver.go:765] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I1014 08:47:32.363266   15224 command_runner.go:130] ! I1014 15:46:14.380956       1 handler.go:286] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I1014 08:47:32.363266   15224 command_runner.go:130] ! W1014 15:46:14.381059       1 genericapiserver.go:765] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:32.363266   15224 command_runner.go:130] ! I1014 15:46:14.395072       1 handler.go:286] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I1014 08:47:32.363266   15224 command_runner.go:130] ! W1014 15:46:14.395116       1 genericapiserver.go:765] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:32.363266   15224 command_runner.go:130] ! I1014 15:46:15.014537       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1014 08:47:32.363266   15224 command_runner.go:130] ! I1014 15:46:15.014702       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1014 08:47:32.363266   15224 command_runner.go:130] ! I1014 15:46:15.016123       1 dynamic_serving_content.go:135] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I1014 08:47:32.363266   15224 command_runner.go:130] ! I1014 15:46:15.016823       1 secure_serving.go:213] Serving securely on [::]:8443
	I1014 08:47:32.363266   15224 command_runner.go:130] ! I1014 15:46:15.017426       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1014 08:47:32.363266   15224 command_runner.go:130] ! I1014 15:46:15.018450       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I1014 08:47:32.363266   15224 command_runner.go:130] ! I1014 15:46:15.018766       1 remote_available_controller.go:411] Starting RemoteAvailability controller
	I1014 08:47:32.363266   15224 command_runner.go:130] ! I1014 15:46:15.018850       1 cache.go:32] Waiting for caches to sync for RemoteAvailability controller
	I1014 08:47:32.363266   15224 command_runner.go:130] ! I1014 15:46:15.018985       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I1014 08:47:32.363266   15224 command_runner.go:130] ! I1014 15:46:15.021391       1 controller.go:119] Starting legacy_token_tracking_controller
	I1014 08:47:32.363266   15224 command_runner.go:130] ! I1014 15:46:15.021471       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I1014 08:47:32.363266   15224 command_runner.go:130] ! I1014 15:46:15.021517       1 aggregator.go:169] waiting for initial CRD sync...
	I1014 08:47:32.363266   15224 command_runner.go:130] ! I1014 15:46:15.022050       1 dynamic_serving_content.go:135] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I1014 08:47:32.363266   15224 command_runner.go:130] ! I1014 15:46:15.022573       1 cluster_authentication_trust_controller.go:443] Starting cluster_authentication_trust_controller controller
	I1014 08:47:32.363266   15224 command_runner.go:130] ! I1014 15:46:15.022688       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
	I1014 08:47:32.363266   15224 command_runner.go:130] ! I1014 15:46:15.022775       1 system_namespaces_controller.go:66] Starting system namespaces controller
	I1014 08:47:32.363266   15224 command_runner.go:130] ! I1014 15:46:15.026778       1 apf_controller.go:377] Starting API Priority and Fairness config controller
	I1014 08:47:32.364266   15224 command_runner.go:130] ! I1014 15:46:15.027043       1 controller.go:78] Starting OpenAPI AggregationController
	I1014 08:47:32.364266   15224 command_runner.go:130] ! I1014 15:46:15.027942       1 customresource_discovery_controller.go:292] Starting DiscoveryController
	I1014 08:47:32.364266   15224 command_runner.go:130] ! I1014 15:46:15.029402       1 local_available_controller.go:156] Starting LocalAvailability controller
	I1014 08:47:32.364266   15224 command_runner.go:130] ! I1014 15:46:15.029447       1 cache.go:32] Waiting for caches to sync for LocalAvailability controller
	I1014 08:47:32.364266   15224 command_runner.go:130] ! I1014 15:46:15.029815       1 apiservice_controller.go:100] Starting APIServiceRegistrationController
	I1014 08:47:32.364266   15224 command_runner.go:130] ! I1014 15:46:15.029850       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I1014 08:47:32.364266   15224 command_runner.go:130] ! I1014 15:46:15.034040       1 crdregistration_controller.go:114] Starting crd-autoregister controller
	I1014 08:47:32.364266   15224 command_runner.go:130] ! I1014 15:46:15.034136       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I1014 08:47:32.364266   15224 command_runner.go:130] ! I1014 15:46:15.034690       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1014 08:47:32.364266   15224 command_runner.go:130] ! I1014 15:46:15.034946       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1014 08:47:32.364266   15224 command_runner.go:130] ! I1014 15:46:15.082229       1 controller.go:142] Starting OpenAPI controller
	I1014 08:47:32.364266   15224 command_runner.go:130] ! I1014 15:46:15.083838       1 controller.go:90] Starting OpenAPI V3 controller
	I1014 08:47:32.364266   15224 command_runner.go:130] ! I1014 15:46:15.083894       1 naming_controller.go:294] Starting NamingConditionController
	I1014 08:47:32.364266   15224 command_runner.go:130] ! I1014 15:46:15.086443       1 establishing_controller.go:81] Starting EstablishingController
	I1014 08:47:32.364266   15224 command_runner.go:130] ! I1014 15:46:15.087455       1 nonstructuralschema_controller.go:195] Starting NonStructuralSchemaConditionController
	I1014 08:47:32.364266   15224 command_runner.go:130] ! I1014 15:46:15.088333       1 apiapproval_controller.go:189] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I1014 08:47:32.364266   15224 command_runner.go:130] ! I1014 15:46:15.092677       1 crd_finalizer.go:269] Starting CRDFinalizer
	I1014 08:47:32.364266   15224 command_runner.go:130] ! I1014 15:46:15.212597       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1014 08:47:32.364266   15224 command_runner.go:130] ! I1014 15:46:15.212691       1 policy_source.go:224] refreshing policies
	I1014 08:47:32.364266   15224 command_runner.go:130] ! I1014 15:46:15.221529       1 shared_informer.go:320] Caches are synced for configmaps
	I1014 08:47:32.364266   15224 command_runner.go:130] ! I1014 15:46:15.226910       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1014 08:47:32.364266   15224 command_runner.go:130] ! I1014 15:46:15.227013       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1014 08:47:32.364266   15224 command_runner.go:130] ! I1014 15:46:15.229937       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1014 08:47:32.364266   15224 command_runner.go:130] ! I1014 15:46:15.231898       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1014 08:47:32.364266   15224 command_runner.go:130] ! I1014 15:46:15.233234       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1014 08:47:32.364266   15224 command_runner.go:130] ! I1014 15:46:15.234375       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1014 08:47:32.364266   15224 command_runner.go:130] ! I1014 15:46:15.235151       1 aggregator.go:171] initial CRD sync complete...
	I1014 08:47:32.364266   15224 command_runner.go:130] ! I1014 15:46:15.235400       1 autoregister_controller.go:144] Starting autoregister controller
	I1014 08:47:32.364266   15224 command_runner.go:130] ! I1014 15:46:15.235712       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1014 08:47:32.364266   15224 command_runner.go:130] ! I1014 15:46:15.235936       1 cache.go:39] Caches are synced for autoregister controller
	I1014 08:47:32.364266   15224 command_runner.go:130] ! I1014 15:46:15.255261       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1014 08:47:32.364266   15224 command_runner.go:130] ! I1014 15:46:15.256039       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I1014 08:47:32.364266   15224 command_runner.go:130] ! I1014 15:46:15.271561       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1014 08:47:32.364266   15224 command_runner.go:130] ! I1014 15:46:15.319091       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1014 08:47:32.364266   15224 command_runner.go:130] ! I1014 15:46:16.036564       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1014 08:47:32.364266   15224 command_runner.go:130] ! W1014 15:46:16.558489       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.20.100.167 172.20.106.123]
	I1014 08:47:32.364266   15224 command_runner.go:130] ! I1014 15:46:16.560272       1 controller.go:615] quota admission added evaluator for: endpoints
	I1014 08:47:32.364266   15224 command_runner.go:130] ! I1014 15:46:16.573015       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1014 08:47:32.364266   15224 command_runner.go:130] ! I1014 15:46:18.229365       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1014 08:47:32.364266   15224 command_runner.go:130] ! I1014 15:46:18.748102       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1014 08:47:32.364266   15224 command_runner.go:130] ! I1014 15:46:18.793266       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1014 08:47:32.364266   15224 command_runner.go:130] ! I1014 15:46:18.985788       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1014 08:47:32.364266   15224 command_runner.go:130] ! I1014 15:46:19.024530       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1014 08:47:32.364266   15224 command_runner.go:130] ! W1014 15:46:36.563040       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.20.106.123]
	I1014 08:47:32.371241   15224 logs.go:123] Gathering logs for coredns [d9831e9f8ce8] ...
	I1014 08:47:32.371241   15224 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9831e9f8ce8"
	I1014 08:47:32.404873   15224 command_runner.go:130] > .:53
	I1014 08:47:32.404873   15224 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 6020d6b23d8ab86e45d1d2aab12a43bd19ffd1800c3e6fd0a66779be525e59cd5618fbe141b55ee3a43e920652bc72e7efafa696e55e63b2f614e5fc80dabe10
	I1014 08:47:32.404873   15224 command_runner.go:130] > CoreDNS-1.11.3
	I1014 08:47:32.404959   15224 command_runner.go:130] > linux/amd64, go1.21.11, a6338e9
	I1014 08:47:32.404959   15224 command_runner.go:130] > [INFO] 127.0.0.1:35483 - 39257 "HINFO IN 8382239991273371198.8905610076788717940. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.074337261s
	I1014 08:47:32.404959   15224 command_runner.go:130] > [INFO] 10.244.1.2:36950 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0003062s
	I1014 08:47:32.404959   15224 command_runner.go:130] > [INFO] 10.244.1.2:49277 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.118118924s
	I1014 08:47:32.404959   15224 command_runner.go:130] > [INFO] 10.244.1.2:33122 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.153089702s
	I1014 08:47:32.404959   15224 command_runner.go:130] > [INFO] 10.244.1.2:44549 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.188160849s
	I1014 08:47:32.404959   15224 command_runner.go:130] > [INFO] 10.244.0.3:43390 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000191s
	I1014 08:47:32.404959   15224 command_runner.go:130] > [INFO] 10.244.0.3:59817 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000279499s
	I1014 08:47:32.405061   15224 command_runner.go:130] > [INFO] 10.244.0.3:34294 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.0002004s
	I1014 08:47:32.405102   15224 command_runner.go:130] > [INFO] 10.244.0.3:56220 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.0002257s
	I1014 08:47:32.405102   15224 command_runner.go:130] > [INFO] 10.244.1.2:44291 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002098s
	I1014 08:47:32.405102   15224 command_runner.go:130] > [INFO] 10.244.1.2:42361 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.17965629s
	I1014 08:47:32.405102   15224 command_runner.go:130] > [INFO] 10.244.1.2:48756 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0002923s
	I1014 08:47:32.405207   15224 command_runner.go:130] > [INFO] 10.244.1.2:53437 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000274799s
	I1014 08:47:32.405232   15224 command_runner.go:130] > [INFO] 10.244.1.2:60026 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.013560692s
	I1014 08:47:32.405232   15224 command_runner.go:130] > [INFO] 10.244.1.2:39241 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001752s
	I1014 08:47:32.405232   15224 command_runner.go:130] > [INFO] 10.244.1.2:36696 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0003084s
	I1014 08:47:32.405232   15224 command_runner.go:130] > [INFO] 10.244.1.2:51603 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001109s
	I1014 08:47:32.405319   15224 command_runner.go:130] > [INFO] 10.244.0.3:37516 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002057s
	I1014 08:47:32.405319   15224 command_runner.go:130] > [INFO] 10.244.0.3:41090 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0001329s
	I1014 08:47:32.405319   15224 command_runner.go:130] > [INFO] 10.244.0.3:45330 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001397s
	I1014 08:47:32.405397   15224 command_runner.go:130] > [INFO] 10.244.0.3:46520 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000154699s
	I1014 08:47:32.405419   15224 command_runner.go:130] > [INFO] 10.244.0.3:40342 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0001347s
	I1014 08:47:32.405419   15224 command_runner.go:130] > [INFO] 10.244.0.3:60385 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001518s
	I1014 08:47:32.405419   15224 command_runner.go:130] > [INFO] 10.244.0.3:56338 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001527s
	I1014 08:47:32.405419   15224 command_runner.go:130] > [INFO] 10.244.0.3:50480 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0002505s
	I1014 08:47:32.405500   15224 command_runner.go:130] > [INFO] 10.244.1.2:60726 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002297s
	I1014 08:47:32.405522   15224 command_runner.go:130] > [INFO] 10.244.1.2:44347 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0002249s
	I1014 08:47:32.405522   15224 command_runner.go:130] > [INFO] 10.244.1.2:49092 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001075s
	I1014 08:47:32.405586   15224 command_runner.go:130] > [INFO] 10.244.1.2:54937 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000068s
	I1014 08:47:32.405615   15224 command_runner.go:130] > [INFO] 10.244.0.3:59749 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002552s
	I1014 08:47:32.405615   15224 command_runner.go:130] > [INFO] 10.244.0.3:56008 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0002499s
	I1014 08:47:32.405615   15224 command_runner.go:130] > [INFO] 10.244.0.3:60338 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001065s
	I1014 08:47:32.405615   15224 command_runner.go:130] > [INFO] 10.244.0.3:49712 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001634s
	I1014 08:47:32.405735   15224 command_runner.go:130] > [INFO] 10.244.1.2:56508 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001577s
	I1014 08:47:32.405795   15224 command_runner.go:130] > [INFO] 10.244.1.2:45387 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001545s
	I1014 08:47:32.405795   15224 command_runner.go:130] > [INFO] 10.244.1.2:39608 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0001419s
	I1014 08:47:32.405795   15224 command_runner.go:130] > [INFO] 10.244.1.2:53878 - 5 "PTR IN 1.96.20.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.0001923s
	I1014 08:47:32.405795   15224 command_runner.go:130] > [INFO] 10.244.0.3:58589 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002828s
	I1014 08:47:32.405892   15224 command_runner.go:130] > [INFO] 10.244.0.3:58608 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001609s
	I1014 08:47:32.405892   15224 command_runner.go:130] > [INFO] 10.244.0.3:52599 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000633s
	I1014 08:47:32.405892   15224 command_runner.go:130] > [INFO] 10.244.0.3:58233 - 5 "PTR IN 1.96.20.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.0001058s
	I1014 08:47:32.405956   15224 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I1014 08:47:32.405956   15224 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
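
	The NOERROR/NXDOMAIN query lines above are ordinary cluster DNS traffic. They can be reproduced from any pod, for example with a throwaway busybox (the image tag here is an assumption; any recent busybox works):

	    kubectl run dns-probe --rm -it --restart=Never --image=busybox:1.36 -- \
	      nslookup kubernetes.default.svc.cluster.local
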
	I1014 08:47:32.409048   15224 logs.go:123] Gathering logs for kube-scheduler [d428685276e1] ...
	I1014 08:47:32.409048   15224 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d428685276e1"
	I1014 08:47:32.436500   15224 command_runner.go:130] ! I1014 15:46:12.515594       1 serving.go:386] Generated self-signed cert in-memory
	I1014 08:47:32.437069   15224 command_runner.go:130] ! W1014 15:46:15.152686       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I1014 08:47:32.437069   15224 command_runner.go:130] ! W1014 15:46:15.152818       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I1014 08:47:32.437069   15224 command_runner.go:130] ! W1014 15:46:15.152851       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	I1014 08:47:32.437178   15224 command_runner.go:130] ! W1014 15:46:15.153007       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1014 08:47:32.437178   15224 command_runner.go:130] ! I1014 15:46:15.250163       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I1014 08:47:32.437250   15224 command_runner.go:130] ! I1014 15:46:15.250420       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 08:47:32.437250   15224 command_runner.go:130] ! I1014 15:46:15.258344       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1014 08:47:32.437319   15224 command_runner.go:130] ! I1014 15:46:15.258735       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1014 08:47:32.437347   15224 command_runner.go:130] ! I1014 15:46:15.263966       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1014 08:47:32.437380   15224 command_runner.go:130] ! I1014 15:46:15.258753       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1014 08:47:32.437405   15224 command_runner.go:130] ! I1014 15:46:15.365145       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
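
	The requestheader warnings above are benign at startup (the scheduler retries and the sync at 15:46:15.365145 succeeds). If they persisted, the fix the log suggests would look like the sketch below; note that this scheduler authenticates as the user system:kube-scheduler via client certificate rather than a service account, so a --user binding (a hedged substitution for the log's --serviceaccount form) is the closer fit:

	    kubectl create rolebinding scheduler-extension-apiserver-authn-reader \
	      -n kube-system \
	      --role=extension-apiserver-authentication-reader \
	      --user=system:kube-scheduler
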
	I1014 08:47:32.439893   15224 logs.go:123] Gathering logs for kube-proxy [e83db276dec3] ...
	I1014 08:47:32.439974   15224 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e83db276dec3"
	I1014 08:47:32.471190   15224 command_runner.go:130] ! I1014 15:46:17.821967       1 server_linux.go:66] "Using iptables proxy"
	I1014 08:47:32.471190   15224 command_runner.go:130] ! E1014 15:46:17.985243       1 proxier.go:734] "Error cleaning up nftables rules" err=<
	I1014 08:47:32.471291   15224 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-24: Error: Could not process rule: Operation not supported
	I1014 08:47:32.471291   15224 command_runner.go:130] ! 	add table ip kube-proxy
	I1014 08:47:32.471291   15224 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^
	I1014 08:47:32.471291   15224 command_runner.go:130] !  >
	I1014 08:47:32.471291   15224 command_runner.go:130] ! E1014 15:46:18.020523       1 proxier.go:734] "Error cleaning up nftables rules" err=<
	I1014 08:47:32.471291   15224 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
	I1014 08:47:32.471354   15224 command_runner.go:130] ! 	add table ip6 kube-proxy
	I1014 08:47:32.471354   15224 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^^
	I1014 08:47:32.471354   15224 command_runner.go:130] !  >
	I1014 08:47:32.471354   15224 command_runner.go:130] ! I1014 15:46:18.173230       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["172.20.106.123"]
	I1014 08:47:32.471450   15224 command_runner.go:130] ! E1014 15:46:18.173392       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1014 08:47:32.471514   15224 command_runner.go:130] ! I1014 15:46:18.286207       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1014 08:47:32.471534   15224 command_runner.go:130] ! I1014 15:46:18.287289       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1014 08:47:32.471534   15224 command_runner.go:130] ! I1014 15:46:18.287905       1 server_linux.go:169] "Using iptables Proxier"
	I1014 08:47:32.471597   15224 command_runner.go:130] ! I1014 15:46:18.293792       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1014 08:47:32.471622   15224 command_runner.go:130] ! I1014 15:46:18.300740       1 server.go:483] "Version info" version="v1.31.1"
	I1014 08:47:32.471652   15224 command_runner.go:130] ! I1014 15:46:18.300778       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 08:47:32.471652   15224 command_runner.go:130] ! I1014 15:46:18.305824       1 config.go:199] "Starting service config controller"
	I1014 08:47:32.471652   15224 command_runner.go:130] ! I1014 15:46:18.308209       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1014 08:47:32.471652   15224 command_runner.go:130] ! I1014 15:46:18.308868       1 config.go:328] "Starting node config controller"
	I1014 08:47:32.471652   15224 command_runner.go:130] ! I1014 15:46:18.314183       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1014 08:47:32.471652   15224 command_runner.go:130] ! I1014 15:46:18.309398       1 config.go:105] "Starting endpoint slice config controller"
	I1014 08:47:32.471652   15224 command_runner.go:130] ! I1014 15:46:18.317842       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1014 08:47:32.471652   15224 command_runner.go:130] ! I1014 15:46:18.419882       1 shared_informer.go:320] Caches are synced for node config
	I1014 08:47:32.471652   15224 command_runner.go:130] ! I1014 15:46:18.419918       1 shared_informer.go:320] Caches are synced for service config
	I1014 08:47:32.471652   15224 command_runner.go:130] ! I1014 15:46:18.435586       1 shared_informer.go:320] Caches are synced for endpoint slice config
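
	The two "Error cleaning up nftables rules" entries above are kube-proxy probing for nftables support before settling on the iptables proxier ("Using iptables Proxier" at 15:46:18.287905). The same probe can be repeated by hand inside the VM, assuming the nft binary is present in the guest image (the table is removed again if the add succeeds):

	    minikube -p multinode-671000 ssh -- \
	      'echo "add table ip kube-proxy" | sudo nft -f - && sudo nft delete table ip kube-proxy'
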
	I1014 08:47:32.474216   15224 logs.go:123] Gathering logs for kindnet [fcdf89a3ac8c] ...
	I1014 08:47:32.474216   15224 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcdf89a3ac8c"
	I1014 08:47:32.503247   15224 command_runner.go:130] ! I1014 15:32:44.862261       1 main.go:300] handling current node
	I1014 08:47:32.503247   15224 command_runner.go:130] ! I1014 15:32:44.862301       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.503942   15224 command_runner.go:130] ! I1014 15:32:44.862313       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.503980   15224 command_runner.go:130] ! I1014 15:32:44.862605       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.504770   15224 command_runner.go:130] ! I1014 15:32:44.862636       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.504770   15224 command_runner.go:130] ! I1014 15:32:54.862103       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.504861   15224 command_runner.go:130] ! I1014 15:32:54.862232       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.509199   15224 command_runner.go:130] ! I1014 15:32:54.862979       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.509342   15224 command_runner.go:130] ! I1014 15:32:54.863013       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.509707   15224 command_runner.go:130] ! I1014 15:32:54.863219       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.510555   15224 command_runner.go:130] ! I1014 15:32:54.863233       1 main.go:300] handling current node
	I1014 08:47:32.510555   15224 command_runner.go:130] ! I1014 15:33:04.864377       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.510555   15224 command_runner.go:130] ! I1014 15:33:04.864510       1 main.go:300] handling current node
	I1014 08:47:32.510555   15224 command_runner.go:130] ! I1014 15:33:04.864534       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.510555   15224 command_runner.go:130] ! I1014 15:33:04.864544       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.510555   15224 command_runner.go:130] ! I1014 15:33:04.864795       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.510555   15224 command_runner.go:130] ! I1014 15:33:04.864807       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.510555   15224 command_runner.go:130] ! I1014 15:33:14.870098       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.510555   15224 command_runner.go:130] ! I1014 15:33:14.870279       1 main.go:300] handling current node
	I1014 08:47:32.510555   15224 command_runner.go:130] ! I1014 15:33:14.870319       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.510555   15224 command_runner.go:130] ! I1014 15:33:14.870394       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.510555   15224 command_runner.go:130] ! I1014 15:33:14.872221       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.510555   15224 command_runner.go:130] ! I1014 15:33:14.872265       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.510555   15224 command_runner.go:130] ! I1014 15:33:24.862168       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.510555   15224 command_runner.go:130] ! I1014 15:33:24.862234       1 main.go:300] handling current node
	I1014 08:47:32.510555   15224 command_runner.go:130] ! I1014 15:33:24.862290       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.510555   15224 command_runner.go:130] ! I1014 15:33:24.862303       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.510555   15224 command_runner.go:130] ! I1014 15:33:24.862799       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.510555   15224 command_runner.go:130] ! I1014 15:33:24.862950       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.511132   15224 command_runner.go:130] ! I1014 15:33:34.870712       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.511132   15224 command_runner.go:130] ! I1014 15:33:34.870952       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.511196   15224 command_runner.go:130] ! I1014 15:33:34.871749       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.511196   15224 command_runner.go:130] ! I1014 15:33:34.871848       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.511196   15224 command_runner.go:130] ! I1014 15:33:34.872312       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.511245   15224 command_runner.go:130] ! I1014 15:33:34.872409       1 main.go:300] handling current node
	I1014 08:47:32.511245   15224 command_runner.go:130] ! I1014 15:33:44.868271       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.511245   15224 command_runner.go:130] ! I1014 15:33:44.868442       1 main.go:300] handling current node
	I1014 08:47:32.511245   15224 command_runner.go:130] ! I1014 15:33:44.868482       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.511245   15224 command_runner.go:130] ! I1014 15:33:44.868509       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.511245   15224 command_runner.go:130] ! I1014 15:33:44.869165       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.511245   15224 command_runner.go:130] ! I1014 15:33:44.869252       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.511245   15224 command_runner.go:130] ! I1014 15:33:54.862162       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.511793   15224 command_runner.go:130] ! I1014 15:33:54.862365       1 main.go:300] handling current node
	I1014 08:47:32.511793   15224 command_runner.go:130] ! I1014 15:33:54.862404       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.511985   15224 command_runner.go:130] ! I1014 15:33:54.862429       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.512015   15224 command_runner.go:130] ! I1014 15:33:54.862766       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:33:54.862800       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:34:04.870860       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:34:04.870993       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:34:04.871751       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:34:04.871830       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:34:04.872365       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:34:04.872444       1 main.go:300] handling current node
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:34:14.868274       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:34:14.868410       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:34:14.869151       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:34:14.869244       1 main.go:300] handling current node
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:34:14.869263       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:34:14.869271       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:34:24.869326       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:34:24.869383       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:34:24.870365       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:34:24.870464       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:34:24.871197       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:34:24.871235       1 main.go:300] handling current node
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:34:34.862280       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:34:34.862387       1 main.go:300] handling current node
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:34:34.862420       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:34:34.862440       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:34:34.862809       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:34:34.862844       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:34:44.870611       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:34:44.870703       1 main.go:300] handling current node
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:34:44.870732       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:34:44.870826       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:34:44.871348       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:34:44.871437       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:34:54.862260       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:34:54.862358       1 main.go:300] handling current node
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:34:54.862379       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:34:54.862388       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:34:54.862782       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:34:54.862862       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:35:04.871418       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:35:04.871489       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:35:04.872322       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:35:04.872416       1 main.go:300] handling current node
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:35:04.872437       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:35:04.872445       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:35:14.870301       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:35:14.870413       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:35:14.870922       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:35:14.870941       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:35:14.871055       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.512847   15224 command_runner.go:130] ! I1014 15:35:14.871086       1 main.go:300] handling current node
	I1014 08:47:32.512847   15224 command_runner.go:130] ! I1014 15:35:24.870776       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.512847   15224 command_runner.go:130] ! I1014 15:35:24.870814       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.512847   15224 command_runner.go:130] ! I1014 15:35:24.871449       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.512847   15224 command_runner.go:130] ! I1014 15:35:24.871682       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.512847   15224 command_runner.go:130] ! I1014 15:35:24.872057       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.512847   15224 command_runner.go:130] ! I1014 15:35:24.872149       1 main.go:300] handling current node
	I1014 08:47:32.512847   15224 command_runner.go:130] ! I1014 15:35:34.871155       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.512847   15224 command_runner.go:130] ! I1014 15:35:34.871422       1 main.go:300] handling current node
	I1014 08:47:32.512847   15224 command_runner.go:130] ! I1014 15:35:34.876612       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.512847   15224 command_runner.go:130] ! I1014 15:35:34.876630       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.512847   15224 command_runner.go:130] ! I1014 15:35:34.876808       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.512847   15224 command_runner.go:130] ! I1014 15:35:34.876817       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.512847   15224 command_runner.go:130] ! I1014 15:35:44.872405       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.512847   15224 command_runner.go:130] ! I1014 15:35:44.872450       1 main.go:300] handling current node
	I1014 08:47:32.512847   15224 command_runner.go:130] ! I1014 15:35:44.872467       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.512847   15224 command_runner.go:130] ! I1014 15:35:44.872473       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.512847   15224 command_runner.go:130] ! I1014 15:35:44.873120       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.512847   15224 command_runner.go:130] ! I1014 15:35:44.873155       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.512847   15224 command_runner.go:130] ! I1014 15:35:54.862113       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.512847   15224 command_runner.go:130] ! I1014 15:35:54.862220       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.512847   15224 command_runner.go:130] ! I1014 15:35:54.862608       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.512847   15224 command_runner.go:130] ! I1014 15:35:54.862725       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.512847   15224 command_runner.go:130] ! I1014 15:35:54.862993       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.512847   15224 command_runner.go:130] ! I1014 15:35:54.863089       1 main.go:300] handling current node
	I1014 08:47:32.512847   15224 command_runner.go:130] ! I1014 15:36:04.870594       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.512847   15224 command_runner.go:130] ! I1014 15:36:04.870634       1 main.go:300] handling current node
	I1014 08:47:32.512847   15224 command_runner.go:130] ! I1014 15:36:04.870705       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.512847   15224 command_runner.go:130] ! I1014 15:36:04.870719       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.512847   15224 command_runner.go:130] ! I1014 15:36:04.871246       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.512847   15224 command_runner.go:130] ! I1014 15:36:04.871261       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.512847   15224 command_runner.go:130] ! I1014 15:36:14.862194       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.512847   15224 command_runner.go:130] ! I1014 15:36:14.862337       1 main.go:300] handling current node
	I1014 08:47:32.512847   15224 command_runner.go:130] ! I1014 15:36:14.862361       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.512847   15224 command_runner.go:130] ! I1014 15:36:14.862370       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.512847   15224 command_runner.go:130] ! I1014 15:36:14.863024       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.512847   15224 command_runner.go:130] ! I1014 15:36:14.863053       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.512847   15224 command_runner.go:130] ! I1014 15:36:24.870839       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.512847   15224 command_runner.go:130] ! I1014 15:36:24.871114       1 main.go:300] handling current node
	I1014 08:47:32.512847   15224 command_runner.go:130] ! I1014 15:36:24.871303       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.512847   15224 command_runner.go:130] ! I1014 15:36:24.871618       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.512847   15224 command_runner.go:130] ! I1014 15:36:24.872052       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:36:24.872164       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:36:34.870320       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:36:34.870375       1 main.go:300] handling current node
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:36:34.870396       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:36:34.870404       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:36:34.870774       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:36:34.870810       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:36:44.864305       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:36:44.864530       1 main.go:300] handling current node
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:36:44.864616       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:36:44.864683       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:36:44.865206       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:36:44.865241       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:36:54.862701       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:36:54.862834       1 main.go:300] handling current node
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:36:54.862940       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:36:54.863054       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:36:54.864321       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:36:54.864397       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:37:04.863761       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:37:04.863854       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:37:04.864505       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:37:04.864638       1 main.go:300] handling current node
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:37:04.864656       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:37:04.864664       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:37:14.866293       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:37:14.866653       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:37:14.867034       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:37:14.867067       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:37:14.867179       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:37:14.867247       1 main.go:300] handling current node
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:37:24.867969       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:37:24.868019       1 main.go:300] handling current node
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:37:24.868036       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:37:24.868043       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:37:24.868511       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:37:24.868549       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:37:34.863786       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:37:34.864224       1 main.go:300] handling current node
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:37:34.864384       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:37:34.864448       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:37:34.864771       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:37:34.864865       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:37:44.871310       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:37:44.871352       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:37:44.871803       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:37:44.871837       1 main.go:300] handling current node
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:37:44.871852       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:37:44.871859       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:37:54.862573       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:37:54.862694       1 main.go:300] handling current node
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:37:54.862714       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:37:54.862723       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:37:54.863288       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:37:54.863364       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:38:04.872124       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:38:04.872285       1 main.go:300] handling current node
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:38:04.872330       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:38:04.872343       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:38:04.873184       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:38:04.873352       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:38:14.863654       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:38:14.863788       1 main.go:300] handling current node
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:38:14.863812       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:38:14.863822       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:38:14.864488       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:38:14.864585       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:38:24.868537       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:38:24.868643       1 main.go:300] handling current node
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:38:24.868664       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:38:24.868672       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:38:24.869258       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:38:24.869347       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:38:34.864233       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:38:34.864469       1 main.go:300] handling current node
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:38:34.864497       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:38:34.864509       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:38:34.865023       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:38:34.865061       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:38:44.870754       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:38:44.870859       1 main.go:300] handling current node
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:38:44.870919       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:38:44.870931       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:38:44.871124       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:38:44.871155       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:38:54.862849       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:38:54.863008       1 main.go:300] handling current node
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:38:54.863029       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:38:54.863040       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:38:54.863313       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:38:54.863343       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:39:04.861865       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:39:04.862353       1 main.go:300] handling current node
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:39:04.862819       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:39:04.863053       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:39:04.863648       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:39:04.865127       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:39:14.870473       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:39:14.870526       1 main.go:300] handling current node
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:39:14.870544       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:39:14.870551       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:39:14.871123       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:39:14.871161       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:39:24.862264       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:39:24.862304       1 main.go:300] handling current node
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:39:24.862323       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:39:24.862331       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:39:24.863326       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:39:24.863417       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:39:34.862868       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:39:34.863041       1 main.go:300] handling current node
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:39:34.863063       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:39:34.863072       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:39:34.863370       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:39:34.863460       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:39:44.872051       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:39:44.872175       1 main.go:300] handling current node
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:39:44.872198       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:39:44.872392       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:39:44.873038       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:39:44.873160       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:39:54.862953       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:39:54.862990       1 main.go:300] handling current node
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:39:54.863013       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:39:54.863022       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:39:54.863377       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:39:54.863412       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:40:04.864160       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:40:04.864198       1 main.go:300] handling current node
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:40:04.864216       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:40:04.864222       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:40:04.864390       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:40:04.864399       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:40:14.862864       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:40:14.863081       1 main.go:300] handling current node
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:40:14.863442       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:40:14.863496       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:40:14.864019       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:40:14.864052       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:40:24.867383       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:40:24.867717       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:40:24.868487       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:40:24.868619       1 main.go:300] handling current node
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:40:24.868640       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:40:24.868650       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:40:34.866060       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:40:34.866194       1 main.go:300] handling current node
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:40:34.866224       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:40:34.866240       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:40:34.867632       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:40:34.867868       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:40:44.875002       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:40:44.875336       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:40:44.875792       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:40:44.875991       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:40:44.876302       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:40:44.876531       1 main.go:300] handling current node
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:40:54.862504       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:40:54.862640       1 main.go:300] handling current node
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:40:54.862766       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:40:54.862834       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:40:54.863108       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:40:54.863140       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:41:04.863181       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:41:04.863304       1 main.go:300] handling current node
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:41:04.863326       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:41:04.863335       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:41:04.863824       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:41:04.863963       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:41:14.868270       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:41:14.868443       1 main.go:300] handling current node
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:41:14.868487       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:41:14.868541       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:41:14.868808       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:41:14.868843       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:41:24.862261       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:41:24.862508       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:41:24.863242       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:41:24.863792       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:41:24.864172       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:41:24.864327       1 main.go:300] handling current node
	I1014 08:47:32.516842   15224 command_runner.go:130] ! I1014 15:41:34.862294       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.516842   15224 command_runner.go:130] ! I1014 15:41:34.862355       1 main.go:300] handling current node
	I1014 08:47:32.516842   15224 command_runner.go:130] ! I1014 15:41:34.862377       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.516842   15224 command_runner.go:130] ! I1014 15:41:34.862385       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.516842   15224 command_runner.go:130] ! I1014 15:41:44.862674       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:32.516842   15224 command_runner.go:130] ! I1014 15:41:44.862799       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:32.516842   15224 command_runner.go:130] ! I1014 15:41:44.863254       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.20.102.29 Flags: [] Table: 0 Realm: 0} 
	I1014 08:47:32.516842   15224 command_runner.go:130] ! I1014 15:41:44.863509       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.516842   15224 command_runner.go:130] ! I1014 15:41:44.863768       1 main.go:300] handling current node
	I1014 08:47:32.516842   15224 command_runner.go:130] ! I1014 15:41:44.863945       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.516842   15224 command_runner.go:130] ! I1014 15:41:44.864052       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.516842   15224 command_runner.go:130] ! I1014 15:41:54.862083       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.516842   15224 command_runner.go:130] ! I1014 15:41:54.862208       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.516842   15224 command_runner.go:130] ! I1014 15:41:54.862577       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:32.516842   15224 command_runner.go:130] ! I1014 15:41:54.862723       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:32.516842   15224 command_runner.go:130] ! I1014 15:41:54.863005       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.516842   15224 command_runner.go:130] ! I1014 15:41:54.863097       1 main.go:300] handling current node
	I1014 08:47:32.516842   15224 command_runner.go:130] ! I1014 15:42:04.870504       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.516842   15224 command_runner.go:130] ! I1014 15:42:04.871039       1 main.go:300] handling current node
	I1014 08:47:32.516842   15224 command_runner.go:130] ! I1014 15:42:04.871167       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.516842   15224 command_runner.go:130] ! I1014 15:42:04.871277       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.516842   15224 command_runner.go:130] ! I1014 15:42:04.871721       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:32.516842   15224 command_runner.go:130] ! I1014 15:42:04.871740       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:32.516842   15224 command_runner.go:130] ! I1014 15:42:14.862252       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.516842   15224 command_runner.go:130] ! I1014 15:42:14.862455       1 main.go:300] handling current node
	I1014 08:47:32.516842   15224 command_runner.go:130] ! I1014 15:42:14.862499       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.516842   15224 command_runner.go:130] ! I1014 15:42:14.862521       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.516842   15224 command_runner.go:130] ! I1014 15:42:14.863189       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:32.516842   15224 command_runner.go:130] ! I1014 15:42:14.863224       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:32.516842   15224 command_runner.go:130] ! I1014 15:42:24.862819       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.516842   15224 command_runner.go:130] ! I1014 15:42:24.863072       1 main.go:300] handling current node
	I1014 08:47:32.516842   15224 command_runner.go:130] ! I1014 15:42:24.863093       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.516842   15224 command_runner.go:130] ! I1014 15:42:24.863103       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.516842   15224 command_runner.go:130] ! I1014 15:42:24.864093       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:32.516842   15224 command_runner.go:130] ! I1014 15:42:24.864136       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:32.516842   15224 command_runner.go:130] ! I1014 15:42:34.863373       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:32.516842   15224 command_runner.go:130] ! I1014 15:42:34.863425       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:32.516842   15224 command_runner.go:130] ! I1014 15:42:34.863670       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.516842   15224 command_runner.go:130] ! I1014 15:42:34.863742       1 main.go:300] handling current node
	I1014 08:47:32.516842   15224 command_runner.go:130] ! I1014 15:42:34.863763       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.516842   15224 command_runner.go:130] ! I1014 15:42:34.863771       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.516842   15224 command_runner.go:130] ! I1014 15:42:44.861842       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.517899   15224 command_runner.go:130] ! I1014 15:42:44.862176       1 main.go:300] handling current node
	I1014 08:47:32.517899   15224 command_runner.go:130] ! I1014 15:42:44.862271       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.517899   15224 command_runner.go:130] ! I1014 15:42:44.862357       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.517899   15224 command_runner.go:130] ! I1014 15:42:44.862743       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:32.517899   15224 command_runner.go:130] ! I1014 15:42:44.863009       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:32.517899   15224 command_runner.go:130] ! I1014 15:42:54.863140       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:32.517899   15224 command_runner.go:130] ! I1014 15:42:54.863181       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:32.517899   15224 command_runner.go:130] ! I1014 15:42:54.863865       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.517899   15224 command_runner.go:130] ! I1014 15:42:54.864051       1 main.go:300] handling current node
	I1014 08:47:32.517899   15224 command_runner.go:130] ! I1014 15:42:54.864417       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.517899   15224 command_runner.go:130] ! I1014 15:42:54.864427       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.517899   15224 command_runner.go:130] ! I1014 15:43:04.862539       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.517899   15224 command_runner.go:130] ! I1014 15:43:04.862625       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.517899   15224 command_runner.go:130] ! I1014 15:43:04.863289       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:32.517899   15224 command_runner.go:130] ! I1014 15:43:04.863395       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:32.517899   15224 command_runner.go:130] ! I1014 15:43:04.863612       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.517899   15224 command_runner.go:130] ! I1014 15:43:04.863764       1 main.go:300] handling current node
	I1014 08:47:32.517899   15224 command_runner.go:130] ! I1014 15:43:14.871242       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.517899   15224 command_runner.go:130] ! I1014 15:43:14.871727       1 main.go:300] handling current node
	I1014 08:47:32.517899   15224 command_runner.go:130] ! I1014 15:43:14.871818       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.517899   15224 command_runner.go:130] ! I1014 15:43:14.871846       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.517899   15224 command_runner.go:130] ! I1014 15:43:14.872085       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:32.517899   15224 command_runner.go:130] ! I1014 15:43:14.872201       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:32.517899   15224 command_runner.go:130] ! I1014 15:43:24.871405       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.517899   15224 command_runner.go:130] ! I1014 15:43:24.871540       1 main.go:300] handling current node
	I1014 08:47:32.517899   15224 command_runner.go:130] ! I1014 15:43:24.871566       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.517899   15224 command_runner.go:130] ! I1014 15:43:24.871575       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.517899   15224 command_runner.go:130] ! I1014 15:43:24.871835       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:32.517899   15224 command_runner.go:130] ! I1014 15:43:24.872193       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:32.517899   15224 command_runner.go:130] ! I1014 15:43:34.863042       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:32.517899   15224 command_runner.go:130] ! I1014 15:43:34.863237       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:32.517899   15224 command_runner.go:130] ! I1014 15:43:34.863962       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.517899   15224 command_runner.go:130] ! I1014 15:43:34.864059       1 main.go:300] handling current node
	I1014 08:47:32.517899   15224 command_runner.go:130] ! I1014 15:43:34.864077       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.517899   15224 command_runner.go:130] ! I1014 15:43:34.864085       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.517899   15224 command_runner.go:130] ! I1014 15:43:44.871016       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.517899   15224 command_runner.go:130] ! I1014 15:43:44.871057       1 main.go:300] handling current node
	I1014 08:47:32.517899   15224 command_runner.go:130] ! I1014 15:43:44.871074       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.518872   15224 command_runner.go:130] ! I1014 15:43:44.871081       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.518872   15224 command_runner.go:130] ! I1014 15:43:44.871299       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:32.518872   15224 command_runner.go:130] ! I1014 15:43:44.871310       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
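	[editor's note] The kindnet log above is one reconcile pass roughly every 10 seconds: for each node in the cluster, kindnet either notes the current node (whose pod CIDR is reachable locally) or ensures a route to the remote node's pod CIDR via that node's InternalIP. The one substantive event in this window is at 15:41:44, when multinode-671000-m03 reappears with a new IP (172.20.102.29) and a new CIDR (10.244.3.0/24) and kindnet logs "Adding route" for it. The sketch below illustrates that loop with hypothetical type and function names; it is not kindnet's actual source, and real kindnet programs kernel routes via netlink rather than printing them.

    // Illustrative reconcile loop behind the repeated
    // "Handling node with IPs" / "has CIDR" lines above.
    package main

    import "fmt"

    type node struct {
    	name    string
    	ip      string // InternalIP, e.g. 172.20.109.137
    	podCIDR string // e.g. 10.244.1.0/24
    	current bool
    }

    func reconcile(nodes []node) {
    	for _, n := range nodes {
    		fmt.Printf("Handling node with IPs: map[%s:{}]\n", n.ip)
    		if n.current {
    			// The local pod CIDR is served by the node's own
    			// bridge; no route needs to be installed.
    			fmt.Println("handling current node")
    			continue
    		}
    		// For remote nodes, ensure a route equivalent to:
    		//   ip route replace <podCIDR> via <node IP>
    		// The "Adding route {... Dst: 10.244.3.0/24 ...
    		// Gw: 172.20.102.29 ...}" line above is this step after
    		// m03 came back with a new IP and CIDR.
    		fmt.Printf("route replace %s via %s\n", n.podCIDR, n.ip)
    	}
    }

    func main() {
    	reconcile([]node{
    		{"multinode-671000", "172.20.100.167", "10.244.0.0/24", true},
    		{"multinode-671000-m02", "172.20.109.137", "10.244.1.0/24", false},
    		{"multinode-671000-m03", "172.20.102.29", "10.244.3.0/24", false},
    	})
    }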
	I1014 08:47:35.038442   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods
	I1014 08:47:35.038539   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:35.038539   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:35.038539   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:35.043918   15224 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:47:35.043918   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:35.043918   15224 round_trippers.go:580]     Audit-Id: 4ce14b73-b264-4a50-b726-0118663ea6b7
	I1014 08:47:35.043918   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:35.043918   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:35.044631   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:35.044631   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:35.044631   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:35 GMT
	I1014 08:47:35.050311   15224 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"2017"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"2001","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 90019 chars]
	I1014 08:47:35.054886   15224 system_pods.go:59] 12 kube-system pods found
	I1014 08:47:35.054886   15224 system_pods.go:61] "coredns-7c65d6cfc9-fs9ct" [fd736862-9e3e-4a3d-9a86-08efd2338477] Running
	I1014 08:47:35.054886   15224 system_pods.go:61] "etcd-multinode-671000" [098aece2-cb2c-470a-878a-872417e4387f] Running
	I1014 08:47:35.054886   15224 system_pods.go:61] "kindnet-5rqxq" [480b1f88-eb32-4638-9834-2be17b8d35ed] Running
	I1014 08:47:35.054886   15224 system_pods.go:61] "kindnet-rgbjf" [445ff184-85e8-4153-a3d0-a0185c4f95de] Running
	I1014 08:47:35.054886   15224 system_pods.go:61] "kindnet-wqrx6" [a508bbf9-7565-4c73-98cb-e9684985c298] Running
	I1014 08:47:35.054886   15224 system_pods.go:61] "kube-apiserver-multinode-671000" [64595feb-e6e8-4e69-a4b7-6459d15e3beb] Running
	I1014 08:47:35.054886   15224 system_pods.go:61] "kube-controller-manager-multinode-671000" [a5c7bb80-c844-476f-ba47-1cd4e599b92d] Running
	I1014 08:47:35.054886   15224 system_pods.go:61] "kube-proxy-kbpjf" [004b7f38-fa3b-4c2c-9524-8d5b1ba514e9] Running
	I1014 08:47:35.054886   15224 system_pods.go:61] "kube-proxy-n6txs" [796a44f9-2067-438d-9359-34d5f968c861] Running
	I1014 08:47:35.054886   15224 system_pods.go:61] "kube-proxy-r74dx" [f8d14473-8859-4015-84e9-d00656cc00c9] Running
	I1014 08:47:35.054886   15224 system_pods.go:61] "kube-scheduler-multinode-671000" [97febcab-f54d-4338-ba7c-2dc5e69b77fc] Running
	I1014 08:47:35.054886   15224 system_pods.go:61] "storage-provisioner" [fde8ff75-bc7f-4db4-b098-c3a08b38d205] Running
	I1014 08:47:35.054886   15224 system_pods.go:74] duration metric: took 3.7148124s to wait for pod list to return data ...
	I1014 08:47:35.054886   15224 default_sa.go:34] waiting for default service account to be created ...
	I1014 08:47:35.054886   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/default/serviceaccounts
	I1014 08:47:35.054886   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:35.054886   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:35.054886   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:35.059242   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:47:35.059242   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:35.059242   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:35 GMT
	I1014 08:47:35.059242   15224 round_trippers.go:580]     Audit-Id: 2923cb0a-1308-40bf-887d-7a385272b091
	I1014 08:47:35.059242   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:35.059242   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:35.059242   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:35.059242   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:35.059242   15224 round_trippers.go:580]     Content-Length: 262
	I1014 08:47:35.059242   15224 request.go:1351] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"2017"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"2d7618c1-d4b9-4719-9d93-d87bd887238a","resourceVersion":"332","creationTimestamp":"2024-10-14T15:22:44Z"}}]}
	I1014 08:47:35.059242   15224 default_sa.go:45] found service account: "default"
	I1014 08:47:35.059242   15224 default_sa.go:55] duration metric: took 4.3564ms for default service account to be created ...
	I1014 08:47:35.059242   15224 system_pods.go:116] waiting for k8s-apps to be running ...
	I1014 08:47:35.059242   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods
	I1014 08:47:35.059242   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:35.059242   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:35.059242   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:35.064097   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:47:35.064191   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:35.064191   15224 round_trippers.go:580]     Audit-Id: 65e353bf-4f4a-4191-843e-20cc4e13fb38
	I1014 08:47:35.064191   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:35.064191   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:35.064191   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:35.064191   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:35.064191   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:35 GMT
	I1014 08:47:35.065155   15224 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"2017"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"2001","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 90019 chars]
	I1014 08:47:35.069236   15224 system_pods.go:86] 12 kube-system pods found
	I1014 08:47:35.069236   15224 system_pods.go:89] "coredns-7c65d6cfc9-fs9ct" [fd736862-9e3e-4a3d-9a86-08efd2338477] Running
	I1014 08:47:35.069236   15224 system_pods.go:89] "etcd-multinode-671000" [098aece2-cb2c-470a-878a-872417e4387f] Running
	I1014 08:47:35.069236   15224 system_pods.go:89] "kindnet-5rqxq" [480b1f88-eb32-4638-9834-2be17b8d35ed] Running
	I1014 08:47:35.069236   15224 system_pods.go:89] "kindnet-rgbjf" [445ff184-85e8-4153-a3d0-a0185c4f95de] Running
	I1014 08:47:35.069236   15224 system_pods.go:89] "kindnet-wqrx6" [a508bbf9-7565-4c73-98cb-e9684985c298] Running
	I1014 08:47:35.069236   15224 system_pods.go:89] "kube-apiserver-multinode-671000" [64595feb-e6e8-4e69-a4b7-6459d15e3beb] Running
	I1014 08:47:35.069236   15224 system_pods.go:89] "kube-controller-manager-multinode-671000" [a5c7bb80-c844-476f-ba47-1cd4e599b92d] Running
	I1014 08:47:35.069236   15224 system_pods.go:89] "kube-proxy-kbpjf" [004b7f38-fa3b-4c2c-9524-8d5b1ba514e9] Running
	I1014 08:47:35.069236   15224 system_pods.go:89] "kube-proxy-n6txs" [796a44f9-2067-438d-9359-34d5f968c861] Running
	I1014 08:47:35.069236   15224 system_pods.go:89] "kube-proxy-r74dx" [f8d14473-8859-4015-84e9-d00656cc00c9] Running
	I1014 08:47:35.069236   15224 system_pods.go:89] "kube-scheduler-multinode-671000" [97febcab-f54d-4338-ba7c-2dc5e69b77fc] Running
	I1014 08:47:35.069236   15224 system_pods.go:89] "storage-provisioner" [fde8ff75-bc7f-4db4-b098-c3a08b38d205] Running
	I1014 08:47:35.069236   15224 system_pods.go:126] duration metric: took 9.994ms to wait for k8s-apps to be running ...
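The check above is a plain pod-phase sweep against the API server. A minimal standalone sketch of the same idea using client-go (the kubeconfig path here is illustrative, not the path minikube uses):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Illustrative path; substitute the kubeconfig for the cluster under test.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("%d kube-system pods found\n", len(pods.Items))
        for _, p := range pods.Items {
            // The log treats anything not yet Running as a reason to keep waiting.
            if p.Status.Phase != corev1.PodRunning {
                fmt.Printf("%q is %s, still waiting\n", p.Name, p.Status.Phase)
            }
        }
    }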
	I1014 08:47:35.069236   15224 system_svc.go:44] waiting for kubelet service to be running ....
	I1014 08:47:35.078776   15224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 08:47:35.103924   15224 system_svc.go:56] duration metric: took 34.6881ms WaitForService to wait for kubelet
	I1014 08:47:35.103924   15224 kubeadm.go:582] duration metric: took 1m14.4277825s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 08:47:35.103924   15224 node_conditions.go:102] verifying NodePressure condition ...
	I1014 08:47:35.103924   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes
	I1014 08:47:35.103924   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:35.103924   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:35.103924   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:35.108233   15224 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:47:35.108276   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:35.108276   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:35.108276   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:35.108276   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:35.108276   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:35.108276   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:35 GMT
	I1014 08:47:35.108276   15224 round_trippers.go:580]     Audit-Id: 552f978c-3f4d-48f3-9401-2216552da7f9
	I1014 08:47:35.108276   15224 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"2017"},"items":[{"metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 16260 chars]
	I1014 08:47:35.109662   15224 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 08:47:35.109662   15224 node_conditions.go:123] node cpu capacity is 2
	I1014 08:47:35.109662   15224 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 08:47:35.109785   15224 node_conditions.go:123] node cpu capacity is 2
	I1014 08:47:35.109785   15224 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 08:47:35.109785   15224 node_conditions.go:123] node cpu capacity is 2
	I1014 08:47:35.109785   15224 node_conditions.go:105] duration metric: took 5.8607ms to run NodePressure ...
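The NodePressure readout above comes straight from each node's status capacity, which is why the same two values repeat once per node. A companion sketch (client construction as in the previous sketch):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // illustrative path
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            // Capacity is a map keyed by resource name; copy to locals so the
            // Quantity values are addressable for String().
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            fmt.Printf("node storage ephemeral capacity is %s\n", storage.String())
            fmt.Printf("node cpu capacity is %s\n", cpu.String())
        }
    }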
	I1014 08:47:35.109785   15224 start.go:241] waiting for startup goroutines ...
	I1014 08:47:35.109785   15224 start.go:246] waiting for cluster config update ...
	I1014 08:47:35.109785   15224 start.go:255] writing updated cluster config ...
	I1014 08:47:35.114765   15224 out.go:201] 
	I1014 08:47:35.130819   15224 config.go:182] Loaded profile config "multinode-671000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 08:47:35.130819   15224 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000\config.json ...
	I1014 08:47:35.137539   15224 out.go:177] * Starting "multinode-671000-m02" worker node in "multinode-671000" cluster
	I1014 08:47:35.140093   15224 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1014 08:47:35.140093   15224 cache.go:56] Caching tarball of preloaded images
	I1014 08:47:35.141226   15224 preload.go:172] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1014 08:47:35.141295   15224 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1014 08:47:35.141295   15224 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000\config.json ...
	I1014 08:47:35.144478   15224 start.go:360] acquireMachinesLock for multinode-671000-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 08:47:35.144686   15224 start.go:364] duration metric: took 139.1µs to acquireMachinesLock for "multinode-671000-m02"
	I1014 08:47:35.144879   15224 start.go:96] Skipping create...Using existing machine configuration
	I1014 08:47:35.144922   15224 fix.go:54] fixHost starting: m02
	I1014 08:47:35.145418   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000-m02 ).state
	I1014 08:47:37.290609   15224 main.go:141] libmachine: [stdout =====>] : Off
	
	I1014 08:47:37.290668   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:47:37.290735   15224 fix.go:112] recreateIfNeeded on multinode-671000-m02: state=Stopped err=<nil>
	W1014 08:47:37.290735   15224 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 08:47:37.294756   15224 out.go:177] * Restarting existing hyperv VM for "multinode-671000-m02" ...
	I1014 08:47:37.297201   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-671000-m02
	I1014 08:47:40.963581   15224 main.go:141] libmachine: [stdout =====>] : 
	I1014 08:47:40.963581   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:47:40.963581   15224 main.go:141] libmachine: Waiting for host to start...
	I1014 08:47:40.963676   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000-m02 ).state
	I1014 08:47:43.167324   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:47:43.167324   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:47:43.167510   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 08:47:45.616967   15224 main.go:141] libmachine: [stdout =====>] : 
	I1014 08:47:45.617759   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:47:46.617958   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000-m02 ).state
	I1014 08:47:48.784401   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:47:48.784401   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:47:48.784401   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 08:47:51.314931   15224 main.go:141] libmachine: [stdout =====>] : 
	I1014 08:47:51.314931   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:47:52.315174   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000-m02 ).state
	I1014 08:47:54.453343   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:47:54.453343   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:47:54.453343   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 08:47:56.903430   15224 main.go:141] libmachine: [stdout =====>] : 
	I1014 08:47:56.903430   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:47:57.903977   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000-m02 ).state
	I1014 08:48:00.069797   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:48:00.069900   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:48:00.069975   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 08:48:02.538439   15224 main.go:141] libmachine: [stdout =====>] : 
	I1014 08:48:02.538439   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:48:03.539297   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000-m02 ).state
	I1014 08:48:05.690154   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:48:05.690154   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:48:05.690154   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 08:48:08.243357   15224 main.go:141] libmachine: [stdout =====>] : 172.20.98.93
	
	I1014 08:48:08.243357   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:48:08.248402   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000-m02 ).state
	I1014 08:48:10.335432   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:48:10.335432   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:48:10.335432   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 08:48:12.806239   15224 main.go:141] libmachine: [stdout =====>] : 172.20.98.93
	
	I1014 08:48:12.807330   15224 main.go:141] libmachine: [stderr =====>] : 
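The "Waiting for host to start" phase above is just two PowerShell queries, VM state and first adapter address, repeated until an address appears. A rough standalone sketch of that loop (the PowerShell expressions are copied verbatim from the log; the one-second retry interval is an assumption, the log shows longer gaps between polls):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // psOutput runs one PowerShell expression the same way the log does and
    // returns its trimmed stdout.
    func psOutput(expr string) (string, error) {
        out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", expr).Output()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        vm := "multinode-671000-m02" // name taken from the log above
        for {
            state, err := psOutput(fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", vm))
            if err != nil || state != "Running" {
                time.Sleep(time.Second)
                continue
            }
            ip, _ := psOutput(fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vm))
            if ip != "" {
                fmt.Println("host is up at", ip)
                return
            }
            time.Sleep(time.Second) // adapter has no address yet; retry
        }
    }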
	I1014 08:48:12.807330   15224 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000\config.json ...
	I1014 08:48:12.810508   15224 machine.go:93] provisionDockerMachine start ...
	I1014 08:48:12.810741   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000-m02 ).state
	I1014 08:48:14.860815   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:48:14.860815   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:48:14.860911   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 08:48:17.358192   15224 main.go:141] libmachine: [stdout =====>] : 172.20.98.93
	
	I1014 08:48:17.359120   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:48:17.364419   15224 main.go:141] libmachine: Using SSH client type: native
	I1014 08:48:17.365587   15224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.98.93 22 <nil> <nil>}
	I1014 08:48:17.365587   15224 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 08:48:17.513670   15224 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1014 08:48:17.513802   15224 buildroot.go:166] provisioning hostname "multinode-671000-m02"
	I1014 08:48:17.513802   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000-m02 ).state
	I1014 08:48:19.601895   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:48:19.602349   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:48:19.602521   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 08:48:22.082086   15224 main.go:141] libmachine: [stdout =====>] : 172.20.98.93
	
	I1014 08:48:22.082188   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:48:22.088443   15224 main.go:141] libmachine: Using SSH client type: native
	I1014 08:48:22.089169   15224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.98.93 22 <nil> <nil>}
	I1014 08:48:22.089169   15224 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-671000-m02 && echo "multinode-671000-m02" | sudo tee /etc/hostname
	I1014 08:48:22.268612   15224 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-671000-m02
	
	I1014 08:48:22.268666   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000-m02 ).state
	I1014 08:48:24.332882   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:48:24.333953   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:48:24.334091   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 08:48:26.831739   15224 main.go:141] libmachine: [stdout =====>] : 172.20.98.93
	
	I1014 08:48:26.831739   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:48:26.838996   15224 main.go:141] libmachine: Using SSH client type: native
	I1014 08:48:26.839156   15224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.98.93 22 <nil> <nil>}
	I1014 08:48:26.839156   15224 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-671000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-671000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-671000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 08:48:26.998901   15224 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 08:48:26.998901   15224 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I1014 08:48:26.999429   15224 buildroot.go:174] setting up certificates
	I1014 08:48:26.999523   15224 provision.go:84] configureAuth start
	I1014 08:48:26.999523   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000-m02 ).state
	I1014 08:48:29.135569   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:48:29.136528   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:48:29.136614   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 08:48:31.694629   15224 main.go:141] libmachine: [stdout =====>] : 172.20.98.93
	
	I1014 08:48:31.694629   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:48:31.695632   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000-m02 ).state
	I1014 08:48:33.763168   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:48:33.763168   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:48:33.763168   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 08:48:36.228813   15224 main.go:141] libmachine: [stdout =====>] : 172.20.98.93
	
	I1014 08:48:36.228997   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:48:36.229086   15224 provision.go:143] copyHostCerts
	I1014 08:48:36.229284   15224 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I1014 08:48:36.229284   15224 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I1014 08:48:36.229284   15224 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I1014 08:48:36.230106   15224 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1014 08:48:36.231513   15224 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I1014 08:48:36.231513   15224 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I1014 08:48:36.231513   15224 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I1014 08:48:36.232256   15224 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1014 08:48:36.232976   15224 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I1014 08:48:36.232976   15224 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I1014 08:48:36.233510   15224 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I1014 08:48:36.233701   15224 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I1014 08:48:36.235210   15224 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-671000-m02 san=[127.0.0.1 172.20.98.93 localhost minikube multinode-671000-m02]
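provision.go:117 above generates a server certificate whose SANs cover the loopback address, the VM's address, and the host names. A minimal self-signed sketch with the same SAN list (minikube signs against its CA rather than self-signing, and the key size and validity period here are assumptions):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.multinode-671000-m02"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SAN entries taken from the log line above.
            DNSNames:    []string{"localhost", "minikube", "multinode-671000-m02"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.20.98.93")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
            panic(err)
        }
    }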
	I1014 08:48:36.448347   15224 provision.go:177] copyRemoteCerts
	I1014 08:48:36.458837   15224 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 08:48:36.458837   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000-m02 ).state
	I1014 08:48:38.495078   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:48:38.495078   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:48:38.495078   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 08:48:40.956097   15224 main.go:141] libmachine: [stdout =====>] : 172.20.98.93
	
	I1014 08:48:40.956097   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:48:40.956829   15224 sshutil.go:53] new ssh client: &{IP:172.20.98.93 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-671000-m02\id_rsa Username:docker}
	I1014 08:48:41.073415   15224 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.6145696s)
	I1014 08:48:41.073477   15224 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1014 08:48:41.073550   15224 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I1014 08:48:41.126083   15224 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1014 08:48:41.126664   15224 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1014 08:48:41.180628   15224 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1014 08:48:41.181202   15224 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1014 08:48:41.232346   15224 provision.go:87] duration metric: took 14.2327966s to configureAuth
	I1014 08:48:41.232346   15224 buildroot.go:189] setting minikube options for container-runtime
	I1014 08:48:41.233398   15224 config.go:182] Loaded profile config "multinode-671000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 08:48:41.233492   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000-m02 ).state
	I1014 08:48:43.314059   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:48:43.314614   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:48:43.314614   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 08:48:45.784112   15224 main.go:141] libmachine: [stdout =====>] : 172.20.98.93
	
	I1014 08:48:45.787503   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:48:45.792289   15224 main.go:141] libmachine: Using SSH client type: native
	I1014 08:48:45.792289   15224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.98.93 22 <nil> <nil>}
	I1014 08:48:45.792289   15224 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1014 08:48:45.937059   15224 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1014 08:48:45.937059   15224 buildroot.go:70] root file system type: tmpfs
	I1014 08:48:45.937312   15224 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1014 08:48:45.937312   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000-m02 ).state
	I1014 08:48:48.024204   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:48:48.025031   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:48:48.025031   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 08:48:50.547062   15224 main.go:141] libmachine: [stdout =====>] : 172.20.98.93
	
	I1014 08:48:50.547062   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:48:50.553187   15224 main.go:141] libmachine: Using SSH client type: native
	I1014 08:48:50.554026   15224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.98.93 22 <nil> <nil>}
	I1014 08:48:50.554026   15224 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.20.106.123"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1014 08:48:50.726180   15224 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.20.106.123
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1014 08:48:50.726334   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000-m02 ).state
	I1014 08:48:52.810701   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:48:52.811129   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:48:52.811129   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 08:48:55.282514   15224 main.go:141] libmachine: [stdout =====>] : 172.20.98.93
	
	I1014 08:48:55.282721   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:48:55.287507   15224 main.go:141] libmachine: Using SSH client type: native
	I1014 08:48:55.288280   15224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.98.93 22 <nil> <nil>}
	I1014 08:48:55.288280   15224 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1014 08:48:57.647303   15224 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1014 08:48:57.647413   15224 machine.go:96] duration metric: took 44.8366885s to provisionDockerMachine
	I1014 08:48:57.647486   15224 start.go:293] postStartSetup for "multinode-671000-m02" (driver="hyperv")
	I1014 08:48:57.647486   15224 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 08:48:57.659006   15224 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 08:48:57.659006   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000-m02 ).state
	I1014 08:48:59.718197   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:48:59.718513   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:48:59.718625   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 08:49:02.162772   15224 main.go:141] libmachine: [stdout =====>] : 172.20.98.93
	
	I1014 08:49:02.162772   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:49:02.162772   15224 sshutil.go:53] new ssh client: &{IP:172.20.98.93 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-671000-m02\id_rsa Username:docker}
	I1014 08:49:02.268225   15224 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.609128s)
	I1014 08:49:02.280986   15224 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 08:49:02.287775   15224 command_runner.go:130] > NAME=Buildroot
	I1014 08:49:02.287775   15224 command_runner.go:130] > VERSION=2023.02.9-dirty
	I1014 08:49:02.287878   15224 command_runner.go:130] > ID=buildroot
	I1014 08:49:02.287878   15224 command_runner.go:130] > VERSION_ID=2023.02.9
	I1014 08:49:02.287878   15224 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I1014 08:49:02.287960   15224 info.go:137] Remote host: Buildroot 2023.02.9
	I1014 08:49:02.288032   15224 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I1014 08:49:02.288449   15224 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I1014 08:49:02.289395   15224 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem -> 9362.pem in /etc/ssl/certs
	I1014 08:49:02.289471   15224 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem -> /etc/ssl/certs/9362.pem
	I1014 08:49:02.299493   15224 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 08:49:02.318762   15224 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem --> /etc/ssl/certs/9362.pem (1708 bytes)
	I1014 08:49:02.369430   15224 start.go:296] duration metric: took 4.7219357s for postStartSetup
	I1014 08:49:02.369585   15224 fix.go:56] duration metric: took 1m27.2245073s for fixHost
	I1014 08:49:02.369690   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000-m02 ).state
	I1014 08:49:04.451777   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:49:04.451777   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:49:04.451777   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 08:49:06.926197   15224 main.go:141] libmachine: [stdout =====>] : 172.20.98.93
	
	I1014 08:49:06.926719   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:49:06.931668   15224 main.go:141] libmachine: Using SSH client type: native
	I1014 08:49:06.931802   15224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.98.93 22 <nil> <nil>}
	I1014 08:49:06.931802   15224 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1014 08:49:07.067443   15224 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728920947.067354943
	
	I1014 08:49:07.067443   15224 fix.go:216] guest clock: 1728920947.067354943
	I1014 08:49:07.067568   15224 fix.go:229] Guest: 2024-10-14 08:49:07.067354943 -0700 PDT Remote: 2024-10-14 08:49:02.3695854 -0700 PDT m=+295.072045601 (delta=4.697769543s)
	I1014 08:49:07.067568   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000-m02 ).state
	I1014 08:49:09.200501   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:49:09.200501   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:49:09.200501   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 08:49:11.705026   15224 main.go:141] libmachine: [stdout =====>] : 172.20.98.93
	
	I1014 08:49:11.705026   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:49:11.711643   15224 main.go:141] libmachine: Using SSH client type: native
	I1014 08:49:11.711835   15224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.98.93 22 <nil> <nil>}
	I1014 08:49:11.711835   15224 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1728920947
	I1014 08:49:11.869653   15224 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Oct 14 15:49:07 UTC 2024
	
	I1014 08:49:11.869653   15224 fix.go:236] clock set: Mon Oct 14 15:49:07 UTC 2024
	 (err=<nil>)
	I1014 08:49:11.869653   15224 start.go:83] releasing machines lock for "multinode-671000-m02", held for 1m36.7247385s
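The clock fix above compares the guest's `date +%s.%N` output against the host clock and then re-asserts the whole-second epoch on the guest with `sudo date -s @<epoch>`. A small sketch of the delta computation, using the sample values from the log (float parsing loses a little sub-second precision, which is fine for drift detection):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    func main() {
        guestOut := "1728920947.067354943" // guest's `date +%s.%N`, from the log above
        remote := time.Date(2024, 10, 14, 8, 49, 2, 369585400,
            time.FixedZone("PDT", -7*3600)) // host-side timestamp from the log

        secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
        if err != nil {
            panic(err)
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        fmt.Printf("Guest: %s Remote: %s (delta=%s)\n", guest, remote, guest.Sub(remote))
        // The log then issues this on the guest over SSH:
        fmt.Printf("sudo date -s @%d\n", int64(secs))
    }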
	I1014 08:49:11.870308   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000-m02 ).state
	I1014 08:49:13.957633   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:49:13.957727   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:49:13.958042   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 08:49:16.447721   15224 main.go:141] libmachine: [stdout =====>] : 172.20.98.93
	
	I1014 08:49:16.447875   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:49:16.450419   15224 out.go:177] * Found network options:
	I1014 08:49:16.452580   15224 out.go:177]   - NO_PROXY=172.20.106.123
	W1014 08:49:16.455109   15224 proxy.go:119] fail to check proxy env: Error ip not in block
	I1014 08:49:16.458000   15224 out.go:177]   - NO_PROXY=172.20.106.123
	W1014 08:49:16.460379   15224 proxy.go:119] fail to check proxy env: Error ip not in block
	W1014 08:49:16.461192   15224 proxy.go:119] fail to check proxy env: Error ip not in block
	I1014 08:49:16.463479   15224 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1014 08:49:16.464095   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000-m02 ).state
	I1014 08:49:16.474410   15224 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1014 08:49:16.475530   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000-m02 ).state
	I1014 08:49:18.616922   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:49:18.617561   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:49:18.617704   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 08:49:18.649602   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:49:18.649602   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:49:18.649742   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 08:49:21.267672   15224 main.go:141] libmachine: [stdout =====>] : 172.20.98.93
	
	I1014 08:49:21.267672   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:49:21.268606   15224 sshutil.go:53] new ssh client: &{IP:172.20.98.93 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-671000-m02\id_rsa Username:docker}
	I1014 08:49:21.294372   15224 main.go:141] libmachine: [stdout =====>] : 172.20.98.93
	
	I1014 08:49:21.294372   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:49:21.295072   15224 sshutil.go:53] new ssh client: &{IP:172.20.98.93 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-671000-m02\id_rsa Username:docker}
	I1014 08:49:21.370611   15224 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I1014 08:49:21.371460   15224 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.895748s)
	W1014 08:49:21.371460   15224 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 08:49:21.383748   15224 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 08:49:21.388705   15224 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I1014 08:49:21.388705   15224 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.9252169s)
	W1014 08:49:21.388705   15224 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1014 08:49:21.417424   15224 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1014 08:49:21.417566   15224 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1014 08:49:21.417566   15224 start.go:495] detecting cgroup driver to use...
	I1014 08:49:21.417992   15224 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 08:49:21.457131   15224 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1014 08:49:21.468921   15224 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1014 08:49:21.501488   15224 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	W1014 08:49:21.501738   15224 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W1014 08:49:21.501888   15224 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1014 08:49:21.526933   15224 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1014 08:49:21.537940   15224 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1014 08:49:21.570189   15224 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1014 08:49:21.604750   15224 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1014 08:49:21.636378   15224 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1014 08:49:21.666973   15224 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 08:49:21.699577   15224 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1014 08:49:21.732318   15224 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1014 08:49:21.763082   15224 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1014 08:49:21.795815   15224 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 08:49:21.816435   15224 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1014 08:49:21.816704   15224 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1014 08:49:21.828042   15224 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1014 08:49:21.860032   15224 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 08:49:21.889832   15224 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 08:49:22.097884   15224 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1014 08:49:22.134711   15224 start.go:495] detecting cgroup driver to use...
	I1014 08:49:22.147519   15224 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1014 08:49:22.172688   15224 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1014 08:49:22.172853   15224 command_runner.go:130] > [Unit]
	I1014 08:49:22.172853   15224 command_runner.go:130] > Description=Docker Application Container Engine
	I1014 08:49:22.172853   15224 command_runner.go:130] > Documentation=https://docs.docker.com
	I1014 08:49:22.172853   15224 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I1014 08:49:22.172853   15224 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I1014 08:49:22.172853   15224 command_runner.go:130] > StartLimitBurst=3
	I1014 08:49:22.172941   15224 command_runner.go:130] > StartLimitIntervalSec=60
	I1014 08:49:22.172941   15224 command_runner.go:130] > [Service]
	I1014 08:49:22.172980   15224 command_runner.go:130] > Type=notify
	I1014 08:49:22.172980   15224 command_runner.go:130] > Restart=on-failure
	I1014 08:49:22.172980   15224 command_runner.go:130] > Environment=NO_PROXY=172.20.106.123
	I1014 08:49:22.172980   15224 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1014 08:49:22.172980   15224 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1014 08:49:22.172980   15224 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1014 08:49:22.172980   15224 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1014 08:49:22.172980   15224 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1014 08:49:22.172980   15224 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1014 08:49:22.172980   15224 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1014 08:49:22.172980   15224 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1014 08:49:22.172980   15224 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1014 08:49:22.172980   15224 command_runner.go:130] > ExecStart=
	I1014 08:49:22.172980   15224 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I1014 08:49:22.172980   15224 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1014 08:49:22.172980   15224 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1014 08:49:22.172980   15224 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1014 08:49:22.172980   15224 command_runner.go:130] > LimitNOFILE=infinity
	I1014 08:49:22.172980   15224 command_runner.go:130] > LimitNPROC=infinity
	I1014 08:49:22.172980   15224 command_runner.go:130] > LimitCORE=infinity
	I1014 08:49:22.172980   15224 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1014 08:49:22.172980   15224 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1014 08:49:22.172980   15224 command_runner.go:130] > TasksMax=infinity
	I1014 08:49:22.172980   15224 command_runner.go:130] > TimeoutStartSec=0
	I1014 08:49:22.172980   15224 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1014 08:49:22.172980   15224 command_runner.go:130] > Delegate=yes
	I1014 08:49:22.172980   15224 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1014 08:49:22.172980   15224 command_runner.go:130] > KillMode=process
	I1014 08:49:22.172980   15224 command_runner.go:130] > [Install]
	I1014 08:49:22.172980   15224 command_runner.go:130] > WantedBy=multi-user.target
	I1014 08:49:22.185260   15224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 08:49:22.218868   15224 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 08:49:22.262544   15224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 08:49:22.302232   15224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1014 08:49:22.342680   15224 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1014 08:49:22.409079   15224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1014 08:49:22.435020   15224 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 08:49:22.471187   15224 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
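	# The crictl.yaml written above points crictl at the cri-dockerd shim, so
	# CRI calls go to Docker rather than the containerd that was just stopped.
	# Quick end-to-end check once the socket exists (sketch):
	#   sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version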
	I1014 08:49:22.485048   15224 ssh_runner.go:195] Run: which cri-dockerd
	I1014 08:49:22.492677   15224 command_runner.go:130] > /usr/bin/cri-dockerd
	I1014 08:49:22.507368   15224 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1014 08:49:22.526949   15224 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1014 08:49:22.569062   15224 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1014 08:49:22.771125   15224 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1014 08:49:22.958430   15224 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1014 08:49:22.958552   15224 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1014 08:49:23.003427   15224 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 08:49:23.194136   15224 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1014 08:49:25.856429   15224 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6622885s)
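	# docker.go:574 wrote /etc/docker/daemon.json to pin the cgroupfs cgroup
	# driver before this restart. Confirming the live setting from inside the
	# VM (sketch, using the docker CLI):
	#   docker info --format '{{.CgroupDriver}}'   # expected: cgroupfs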
	I1014 08:49:25.867684   15224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1014 08:49:25.901859   15224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1014 08:49:25.939885   15224 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1014 08:49:26.143412   15224 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1014 08:49:26.354688   15224 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 08:49:26.559829   15224 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1014 08:49:26.603222   15224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1014 08:49:26.644145   15224 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 08:49:26.861679   15224 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1014 08:49:26.972510   15224 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1014 08:49:26.984161   15224 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1014 08:49:26.993907   15224 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1014 08:49:26.993974   15224 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1014 08:49:26.993974   15224 command_runner.go:130] > Device: 0,22	Inode: 849         Links: 1
	I1014 08:49:26.993974   15224 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I1014 08:49:26.993974   15224 command_runner.go:130] > Access: 2024-10-14 15:49:26.887253131 +0000
	I1014 08:49:26.993974   15224 command_runner.go:130] > Modify: 2024-10-14 15:49:26.887253131 +0000
	I1014 08:49:26.993974   15224 command_runner.go:130] > Change: 2024-10-14 15:49:26.890253139 +0000
	I1014 08:49:26.994063   15224 command_runner.go:130] >  Birth: -
	I1014 08:49:26.994063   15224 start.go:563] Will wait 60s for crictl version
	I1014 08:49:27.005213   15224 ssh_runner.go:195] Run: which crictl
	I1014 08:49:27.011904   15224 command_runner.go:130] > /usr/bin/crictl
	I1014 08:49:27.022689   15224 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1014 08:49:27.088510   15224 command_runner.go:130] > Version:  0.1.0
	I1014 08:49:27.089329   15224 command_runner.go:130] > RuntimeName:  docker
	I1014 08:49:27.089329   15224 command_runner.go:130] > RuntimeVersion:  27.3.1
	I1014 08:49:27.089444   15224 command_runner.go:130] > RuntimeApiVersion:  v1
	I1014 08:49:27.089444   15224 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I1014 08:49:27.099805   15224 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1014 08:49:27.135610   15224 command_runner.go:130] > 27.3.1
	I1014 08:49:27.147639   15224 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1014 08:49:27.184703   15224 command_runner.go:130] > 27.3.1
	I1014 08:49:27.189196   15224 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	I1014 08:49:27.192727   15224 out.go:177]   - env NO_PROXY=172.20.106.123
	I1014 08:49:27.195474   15224 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I1014 08:49:27.200175   15224 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I1014 08:49:27.200175   15224 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I1014 08:49:27.200175   15224 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I1014 08:49:27.200175   15224 ip.go:211] Found interface: {Index:13 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:08:65:4d Flags:up|broadcast|multicast|running}
	I1014 08:49:27.203813   15224 ip.go:214] interface addr: fe80::b548:ff79:d464:75b8/64
	I1014 08:49:27.203813   15224 ip.go:214] interface addr: 172.20.96.1/20
	I1014 08:49:27.216037   15224 ssh_runner.go:195] Run: grep 172.20.96.1	host.minikube.internal$ /etc/hosts
	I1014 08:49:27.221597   15224 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.20.96.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
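	# The one-liner above is minikube's idempotent /etc/hosts update: filter
	# out any stale host.minikube.internal entry, append the current host
	# gateway (172.20.96.1, discovered from the Default Switch interface
	# above), and copy the temp file back with sudo, since a plain redirect
	# into /etc/hosts would run without root. The same idiom is used again
	# below for control-plane.minikube.internal.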
	I1014 08:49:27.241964   15224 mustload.go:65] Loading cluster: multinode-671000
	I1014 08:49:27.242596   15224 config.go:182] Loaded profile config "multinode-671000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 08:49:27.243307   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000 ).state
	I1014 08:49:29.280867   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:49:29.281034   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:49:29.281034   15224 host.go:66] Checking if "multinode-671000" exists ...
	I1014 08:49:29.281682   15224 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000 for IP: 172.20.98.93
	I1014 08:49:29.281682   15224 certs.go:194] generating shared ca certs ...
	I1014 08:49:29.281765   15224 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 08:49:29.282412   15224 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I1014 08:49:29.282412   15224 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I1014 08:49:29.282412   15224 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I1014 08:49:29.283098   15224 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I1014 08:49:29.283098   15224 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1014 08:49:29.283098   15224 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1014 08:49:29.283837   15224 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\936.pem (1338 bytes)
	W1014 08:49:29.284029   15224 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\936_empty.pem, impossibly tiny 0 bytes
	I1014 08:49:29.284156   15224 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1014 08:49:29.284504   15224 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I1014 08:49:29.284504   15224 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1014 08:49:29.285036   15224 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I1014 08:49:29.285239   15224 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem (1708 bytes)
	I1014 08:49:29.285955   15224 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1014 08:49:29.286137   15224 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\936.pem -> /usr/share/ca-certificates/936.pem
	I1014 08:49:29.286137   15224 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem -> /usr/share/ca-certificates/9362.pem
	I1014 08:49:29.286137   15224 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 08:49:29.337430   15224 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1014 08:49:29.383833   15224 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 08:49:29.438248   15224 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 08:49:29.486028   15224 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 08:49:29.532209   15224 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\936.pem --> /usr/share/ca-certificates/936.pem (1338 bytes)
	I1014 08:49:29.578861   15224 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem --> /usr/share/ca-certificates/9362.pem (1708 bytes)
	I1014 08:49:29.638595   15224 ssh_runner.go:195] Run: openssl version
	I1014 08:49:29.648558   15224 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I1014 08:49:29.661385   15224 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9362.pem && ln -fs /usr/share/ca-certificates/9362.pem /etc/ssl/certs/9362.pem"
	I1014 08:49:29.697094   15224 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9362.pem
	I1014 08:49:29.705066   15224 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 14 14:00 /usr/share/ca-certificates/9362.pem
	I1014 08:49:29.705066   15224 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 14:00 /usr/share/ca-certificates/9362.pem
	I1014 08:49:29.717851   15224 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9362.pem
	I1014 08:49:29.725980   15224 command_runner.go:130] > 3ec20f2e
	I1014 08:49:29.737673   15224 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9362.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 08:49:29.768028   15224 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 08:49:29.799670   15224 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 08:49:29.808393   15224 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 14 13:44 /usr/share/ca-certificates/minikubeCA.pem
	I1014 08:49:29.808393   15224 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 13:44 /usr/share/ca-certificates/minikubeCA.pem
	I1014 08:49:29.820216   15224 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 08:49:29.829712   15224 command_runner.go:130] > b5213941
	I1014 08:49:29.843328   15224 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 08:49:29.877150   15224 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/936.pem && ln -fs /usr/share/ca-certificates/936.pem /etc/ssl/certs/936.pem"
	I1014 08:49:29.910960   15224 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/936.pem
	I1014 08:49:29.918146   15224 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 14 14:00 /usr/share/ca-certificates/936.pem
	I1014 08:49:29.918275   15224 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 14:00 /usr/share/ca-certificates/936.pem
	I1014 08:49:29.930357   15224 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/936.pem
	I1014 08:49:29.939713   15224 command_runner.go:130] > 51391683
	I1014 08:49:29.953152   15224 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/936.pem /etc/ssl/certs/51391683.0"
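	# The `openssl x509 -hash` calls above compute each CA's subject-name
	# hash; linking /etc/ssl/certs/<hash>.0 to the PEM is the OpenSSL
	# c_rehash convention that lets verification find the CA. By hand (sketch):
	#   h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	#   sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # b5213941.0 here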
	I1014 08:49:29.988633   15224 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 08:49:29.996061   15224 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1014 08:49:29.996061   15224 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1014 08:49:29.996061   15224 kubeadm.go:934] updating node {m02 172.20.98.93 8443 v1.31.1 docker false true} ...
	I1014 08:49:29.996596   15224 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-671000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.20.98.93
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-671000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
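	# kubeadm.go:946 above renders the kubelet systemd drop-in for this worker:
	# ExecStart is cleared, then re-set so the node joins as
	# multinode-671000-m02 with --node-ip=172.20.98.93. Once the drop-in is
	# copied into /etc/systemd/system/kubelet.service.d below, the effective
	# unit can be checked inside the VM (sketch):
	#   systemctl cat kubelet | grep -e hostname-override -e node-ip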
	I1014 08:49:30.008648   15224 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1014 08:49:30.049887   15224 command_runner.go:130] > kubeadm
	I1014 08:49:30.049887   15224 command_runner.go:130] > kubectl
	I1014 08:49:30.049887   15224 command_runner.go:130] > kubelet
	I1014 08:49:30.049887   15224 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 08:49:30.067908   15224 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1014 08:49:30.109411   15224 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I1014 08:49:30.149254   15224 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 08:49:30.192417   15224 ssh_runner.go:195] Run: grep 172.20.106.123	control-plane.minikube.internal$ /etc/hosts
	I1014 08:49:30.198430   15224 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.20.106.123	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 08:49:30.229076   15224 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 08:49:30.429839   15224 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 08:49:30.459268   15224 host.go:66] Checking if "multinode-671000" exists ...
	I1014 08:49:30.460038   15224 start.go:317] joinCluster: &{Name:multinode-671000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-671000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.106.123 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.20.98.93 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.20.102.29 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 08:49:30.460038   15224 start.go:330] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:172.20.98.93 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1014 08:49:30.460038   15224 host.go:66] Checking if "multinode-671000-m02" exists ...
	I1014 08:49:30.460778   15224 mustload.go:65] Loading cluster: multinode-671000
	I1014 08:49:30.461477   15224 config.go:182] Loaded profile config "multinode-671000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 08:49:30.462235   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000 ).state
	I1014 08:49:32.581996   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:49:32.581996   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:49:32.581996   15224 host.go:66] Checking if "multinode-671000" exists ...
	I1014 08:49:32.582955   15224 api_server.go:166] Checking apiserver status ...
	I1014 08:49:32.594452   15224 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 08:49:32.594452   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000 ).state
	I1014 08:49:34.682724   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:49:34.682724   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:49:34.683409   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000 ).networkadapters[0]).ipaddresses[0]
	I1014 08:49:37.153707   15224 main.go:141] libmachine: [stdout =====>] : 172.20.106.123
	
	I1014 08:49:37.153900   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:49:37.153900   15224 sshutil.go:53] new ssh client: &{IP:172.20.106.123 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-671000\id_rsa Username:docker}
	I1014 08:49:37.261414   15224 command_runner.go:130] > 1906
	I1014 08:49:37.261482   15224 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.667021s)
	I1014 08:49:37.273215   15224 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1906/cgroup
	W1014 08:49:37.293737   15224 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1906/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1014 08:49:37.306060   15224 ssh_runner.go:195] Run: ls
	I1014 08:49:37.314736   15224 api_server.go:253] Checking apiserver healthz at https://172.20.106.123:8443/healthz ...
	I1014 08:49:37.323952   15224 api_server.go:279] https://172.20.106.123:8443/healthz returned 200:
	ok
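	# The probe above is a plain HTTPS GET against the control plane; the
	# earlier freezer-cgroup warning is benign where no freezer controller
	# line appears in /proc/<pid>/cgroup (e.g. on cgroup v2). Manual check
	# (sketch; -k because the serving cert is signed by minikubeCA):
	#   curl -k https://172.20.106.123:8443/healthz   # expect: ok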
	I1014 08:49:37.334192   15224 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl drain multinode-671000-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data
	I1014 08:49:37.504012   15224 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-rgbjf, kube-system/kube-proxy-kbpjf

** /stderr **
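For reference, the log above cuts off midway through minikube's remove-then-rejoin flow for the stale m02 worker (start.go:330): the forced drain has started but no rejoin is logged, which is consistent with the empty post-restart node list reported below. The drain invocation, exactly as logged, was:

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl drain multinode-671000-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data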
multinode_test.go:328: failed to run minikube start. args "out/minikube-windows-amd64.exe node list -p multinode-671000" : exit status 1
multinode_test.go:331: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-671000
multinode_test.go:331: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node list -p multinode-671000: context deadline exceeded (0s)
multinode_test.go:333: failed to run node list. args "out/minikube-windows-amd64.exe node list -p multinode-671000" : context deadline exceeded
multinode_test.go:338: reported node list is not the same after restart. Before restart: multinode-671000	172.20.100.167
multinode-671000-m02	172.20.109.137
multinode-671000-m03	172.20.102.29

After restart: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-671000 -n multinode-671000
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-671000 -n multinode-671000: (11.9942552s)
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-671000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-671000 logs -n 25: (11.7216958s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                                          Args                                                           |     Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| cp      | multinode-671000 cp testdata\cp-test.txt                                                                                | multinode-671000 | minikube1\jenkins | v1.34.0 | 14 Oct 24 08:34 PDT | 14 Oct 24 08:34 PDT |
	|         | multinode-671000-m02:/home/docker/cp-test.txt                                                                           |                  |                   |         |                     |                     |
	| ssh     | multinode-671000 ssh -n                                                                                                 | multinode-671000 | minikube1\jenkins | v1.34.0 | 14 Oct 24 08:34 PDT | 14 Oct 24 08:34 PDT |
	|         | multinode-671000-m02 sudo cat                                                                                           |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                |                  |                   |         |                     |                     |
	| cp      | multinode-671000 cp multinode-671000-m02:/home/docker/cp-test.txt                                                       | multinode-671000 | minikube1\jenkins | v1.34.0 | 14 Oct 24 08:34 PDT | 14 Oct 24 08:35 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile867123417\001\cp-test_multinode-671000-m02.txt |                  |                   |         |                     |                     |
	| ssh     | multinode-671000 ssh -n                                                                                                 | multinode-671000 | minikube1\jenkins | v1.34.0 | 14 Oct 24 08:35 PDT | 14 Oct 24 08:35 PDT |
	|         | multinode-671000-m02 sudo cat                                                                                           |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                |                  |                   |         |                     |                     |
	| cp      | multinode-671000 cp multinode-671000-m02:/home/docker/cp-test.txt                                                       | multinode-671000 | minikube1\jenkins | v1.34.0 | 14 Oct 24 08:35 PDT | 14 Oct 24 08:35 PDT |
	|         | multinode-671000:/home/docker/cp-test_multinode-671000-m02_multinode-671000.txt                                         |                  |                   |         |                     |                     |
	| ssh     | multinode-671000 ssh -n                                                                                                 | multinode-671000 | minikube1\jenkins | v1.34.0 | 14 Oct 24 08:35 PDT | 14 Oct 24 08:35 PDT |
	|         | multinode-671000-m02 sudo cat                                                                                           |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                |                  |                   |         |                     |                     |
	| ssh     | multinode-671000 ssh -n multinode-671000 sudo cat                                                                       | multinode-671000 | minikube1\jenkins | v1.34.0 | 14 Oct 24 08:35 PDT | 14 Oct 24 08:35 PDT |
	|         | /home/docker/cp-test_multinode-671000-m02_multinode-671000.txt                                                          |                  |                   |         |                     |                     |
	| cp      | multinode-671000 cp multinode-671000-m02:/home/docker/cp-test.txt                                                       | multinode-671000 | minikube1\jenkins | v1.34.0 | 14 Oct 24 08:35 PDT | 14 Oct 24 08:36 PDT |
	|         | multinode-671000-m03:/home/docker/cp-test_multinode-671000-m02_multinode-671000-m03.txt                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-671000 ssh -n                                                                                                 | multinode-671000 | minikube1\jenkins | v1.34.0 | 14 Oct 24 08:36 PDT | 14 Oct 24 08:36 PDT |
	|         | multinode-671000-m02 sudo cat                                                                                           |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                |                  |                   |         |                     |                     |
	| ssh     | multinode-671000 ssh -n multinode-671000-m03 sudo cat                                                                   | multinode-671000 | minikube1\jenkins | v1.34.0 | 14 Oct 24 08:36 PDT | 14 Oct 24 08:36 PDT |
	|         | /home/docker/cp-test_multinode-671000-m02_multinode-671000-m03.txt                                                      |                  |                   |         |                     |                     |
	| cp      | multinode-671000 cp testdata\cp-test.txt                                                                                | multinode-671000 | minikube1\jenkins | v1.34.0 | 14 Oct 24 08:36 PDT | 14 Oct 24 08:36 PDT |
	|         | multinode-671000-m03:/home/docker/cp-test.txt                                                                           |                  |                   |         |                     |                     |
	| ssh     | multinode-671000 ssh -n                                                                                                 | multinode-671000 | minikube1\jenkins | v1.34.0 | 14 Oct 24 08:36 PDT | 14 Oct 24 08:36 PDT |
	|         | multinode-671000-m03 sudo cat                                                                                           |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                |                  |                   |         |                     |                     |
	| cp      | multinode-671000 cp multinode-671000-m03:/home/docker/cp-test.txt                                                       | multinode-671000 | minikube1\jenkins | v1.34.0 | 14 Oct 24 08:36 PDT | 14 Oct 24 08:36 PDT |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile867123417\001\cp-test_multinode-671000-m03.txt |                  |                   |         |                     |                     |
	| ssh     | multinode-671000 ssh -n                                                                                                 | multinode-671000 | minikube1\jenkins | v1.34.0 | 14 Oct 24 08:36 PDT | 14 Oct 24 08:36 PDT |
	|         | multinode-671000-m03 sudo cat                                                                                           |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                |                  |                   |         |                     |                     |
	| cp      | multinode-671000 cp multinode-671000-m03:/home/docker/cp-test.txt                                                       | multinode-671000 | minikube1\jenkins | v1.34.0 | 14 Oct 24 08:36 PDT | 14 Oct 24 08:37 PDT |
	|         | multinode-671000:/home/docker/cp-test_multinode-671000-m03_multinode-671000.txt                                         |                  |                   |         |                     |                     |
	| ssh     | multinode-671000 ssh -n                                                                                                 | multinode-671000 | minikube1\jenkins | v1.34.0 | 14 Oct 24 08:37 PDT | 14 Oct 24 08:37 PDT |
	|         | multinode-671000-m03 sudo cat                                                                                           |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                |                  |                   |         |                     |                     |
	| ssh     | multinode-671000 ssh -n multinode-671000 sudo cat                                                                       | multinode-671000 | minikube1\jenkins | v1.34.0 | 14 Oct 24 08:37 PDT | 14 Oct 24 08:37 PDT |
	|         | /home/docker/cp-test_multinode-671000-m03_multinode-671000.txt                                                          |                  |                   |         |                     |                     |
	| cp      | multinode-671000 cp multinode-671000-m03:/home/docker/cp-test.txt                                                       | multinode-671000 | minikube1\jenkins | v1.34.0 | 14 Oct 24 08:37 PDT | 14 Oct 24 08:37 PDT |
	|         | multinode-671000-m02:/home/docker/cp-test_multinode-671000-m03_multinode-671000-m02.txt                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-671000 ssh -n                                                                                                 | multinode-671000 | minikube1\jenkins | v1.34.0 | 14 Oct 24 08:37 PDT | 14 Oct 24 08:37 PDT |
	|         | multinode-671000-m03 sudo cat                                                                                           |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                |                  |                   |         |                     |                     |
	| ssh     | multinode-671000 ssh -n multinode-671000-m02 sudo cat                                                                   | multinode-671000 | minikube1\jenkins | v1.34.0 | 14 Oct 24 08:37 PDT | 14 Oct 24 08:38 PDT |
	|         | /home/docker/cp-test_multinode-671000-m03_multinode-671000-m02.txt                                                      |                  |                   |         |                     |                     |
	| node    | multinode-671000 node stop m03                                                                                          | multinode-671000 | minikube1\jenkins | v1.34.0 | 14 Oct 24 08:38 PDT | 14 Oct 24 08:38 PDT |
	| node    | multinode-671000 node start                                                                                             | multinode-671000 | minikube1\jenkins | v1.34.0 | 14 Oct 24 08:39 PDT | 14 Oct 24 08:41 PDT |
	|         | m03 -v=7 --alsologtostderr                                                                                              |                  |                   |         |                     |                     |
	| node    | list -p multinode-671000                                                                                                | multinode-671000 | minikube1\jenkins | v1.34.0 | 14 Oct 24 08:42 PDT |                     |
	| stop    | -p multinode-671000                                                                                                     | multinode-671000 | minikube1\jenkins | v1.34.0 | 14 Oct 24 08:42 PDT | 14 Oct 24 08:44 PDT |
	| start   | -p multinode-671000                                                                                                     | multinode-671000 | minikube1\jenkins | v1.34.0 | 14 Oct 24 08:44 PDT |                     |
	|         | --wait=true -v=8                                                                                                        |                  |                   |         |                     |                     |
	|         | --alsologtostderr                                                                                                       |                  |                   |         |                     |                     |
	|---------|-------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/14 08:44:07
	Running on machine: minikube1
	Binary: Built with gc go1.23.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 08:44:07.389674   15224 out.go:345] Setting OutFile to fd 1804 ...
	I1014 08:44:07.390740   15224 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 08:44:07.390740   15224 out.go:358] Setting ErrFile to fd 972...
	I1014 08:44:07.390740   15224 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 08:44:07.415971   15224 out.go:352] Setting JSON to false
	I1014 08:44:07.420984   15224 start.go:129] hostinfo: {"hostname":"minikube1","uptime":106161,"bootTime":1728814485,"procs":201,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5011 Build 19045.5011","kernelVersion":"10.0.19045.5011 Build 19045.5011","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W1014 08:44:07.421993   15224 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1014 08:44:07.534857   15224 out.go:177] * [multinode-671000] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5011 Build 19045.5011
	I1014 08:44:07.543901   15224 notify.go:220] Checking for updates...
	I1014 08:44:07.549086   15224 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1014 08:44:07.555416   15224 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 08:44:07.585693   15224 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I1014 08:44:07.605965   15224 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 08:44:07.620024   15224 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 08:44:07.631387   15224 config.go:182] Loaded profile config "multinode-671000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 08:44:07.631791   15224 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 08:44:13.384132   15224 out.go:177] * Using the hyperv driver based on existing profile
	I1014 08:44:13.393772   15224 start.go:297] selected driver: hyperv
	I1014 08:44:13.393772   15224 start.go:901] validating driver "hyperv" against &{Name:multinode-671000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-671000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.100.167 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.20.109.137 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.20.102.29 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 08:44:13.394206   15224 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 08:44:13.459167   15224 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 08:44:13.459337   15224 cni.go:84] Creating CNI manager for ""
	I1014 08:44:13.459337   15224 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1014 08:44:13.459620   15224 start.go:340] cluster config:
	{Name:multinode-671000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-671000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.100.167 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.20.109.137 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.20.102.29 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 08:44:13.459651   15224 iso.go:125] acquiring lock: {Name:mkb71635bcbccba29dce9048ad4cc71430a7e577 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 08:44:13.545479   15224 out.go:177] * Starting "multinode-671000" primary control-plane node in "multinode-671000" cluster
	I1014 08:44:13.553332   15224 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1014 08:44:13.553562   15224 preload.go:146] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I1014 08:44:13.553562   15224 cache.go:56] Caching tarball of preloaded images
	I1014 08:44:13.553562   15224 preload.go:172] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1014 08:44:13.554293   15224 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1014 08:44:13.554511   15224 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000\config.json ...
	I1014 08:44:13.557472   15224 start.go:360] acquireMachinesLock for multinode-671000: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 08:44:13.557472   15224 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-671000"
	I1014 08:44:13.558046   15224 start.go:96] Skipping create...Using existing machine configuration
	I1014 08:44:13.558046   15224 fix.go:54] fixHost starting: 
	I1014 08:44:13.558870   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000 ).state
	I1014 08:44:16.261637   15224 main.go:141] libmachine: [stdout =====>] : Off
	
	I1014 08:44:16.261637   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:44:16.262646   15224 fix.go:112] recreateIfNeeded on multinode-671000: state=Stopped err=<nil>
	W1014 08:44:16.262646   15224 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 08:44:16.293998   15224 out.go:177] * Restarting existing hyperv VM for "multinode-671000" ...
	I1014 08:44:16.384669   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-671000
	I1014 08:44:19.629584   15224 main.go:141] libmachine: [stdout =====>] : 
	I1014 08:44:19.629732   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:44:19.629732   15224 main.go:141] libmachine: Waiting for host to start...
	I1014 08:44:19.629732   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000 ).state
	I1014 08:44:21.853637   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:44:21.854494   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:44:21.854566   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000 ).networkadapters[0]).ipaddresses[0]
	I1014 08:44:24.301745   15224 main.go:141] libmachine: [stdout =====>] : 
	I1014 08:44:24.301745   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:44:25.302201   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000 ).state
	I1014 08:44:27.422612   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:44:27.422612   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:44:27.422924   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000 ).networkadapters[0]).ipaddresses[0]
	I1014 08:44:29.871404   15224 main.go:141] libmachine: [stdout =====>] : 
	I1014 08:44:29.872460   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:44:30.873287   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000 ).state
	I1014 08:44:33.011425   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:44:33.011631   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:44:33.011677   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000 ).networkadapters[0]).ipaddresses[0]
	I1014 08:44:35.443734   15224 main.go:141] libmachine: [stdout =====>] : 
	I1014 08:44:35.443734   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:44:36.444215   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000 ).state
	I1014 08:44:38.627293   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:44:38.627351   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:44:38.627351   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000 ).networkadapters[0]).ipaddresses[0]
	I1014 08:44:41.124871   15224 main.go:141] libmachine: [stdout =====>] : 
	I1014 08:44:41.125002   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:44:42.125974   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000 ).state
	I1014 08:44:44.316671   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:44:44.316852   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:44:44.316852   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000 ).networkadapters[0]).ipaddresses[0]
	I1014 08:44:46.942427   15224 main.go:141] libmachine: [stdout =====>] : 172.20.106.123
	
	I1014 08:44:46.942427   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:44:46.945696   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000 ).state
	I1014 08:44:49.011131   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:44:49.011131   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:44:49.011131   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000 ).networkadapters[0]).ipaddresses[0]
	I1014 08:44:51.492027   15224 main.go:141] libmachine: [stdout =====>] : 172.20.106.123
	
	I1014 08:44:51.492027   15224 main.go:141] libmachine: [stderr =====>] : 
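The repeated state/ipaddresses queries above are minikube's wait-for-IP loop: Start-VM returns before the guest has an address, so the driver keeps polling the VM state and the first adapter's first address until Hyper-V reports one. A minimal bash sketch of the same loop (the VM name and one-second pause mirror the log; the rest is illustrative, not minikube's actual Go code):

    # Hedged sketch of the wait-for-IP polling shown above.
    VM=multinode-671000
    while :; do
        state=$(powershell.exe -NoProfile -NonInteractive "( Hyper-V\Get-VM $VM ).state" | tr -d '\r')
        ip=$(powershell.exe -NoProfile -NonInteractive "(( Hyper-V\Get-VM $VM ).networkadapters[0]).ipaddresses[0]" | tr -d '\r')
        if [ -n "$ip" ]; then
            echo "$VM is $state at $ip"
            break
        fi
        sleep 1
    done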
	I1014 08:44:51.492559   15224 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000\config.json ...
	I1014 08:44:51.495554   15224 machine.go:93] provisionDockerMachine start ...
	I1014 08:44:51.496082   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000 ).state
	I1014 08:44:53.557335   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:44:53.557425   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:44:53.557626   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000 ).networkadapters[0]).ipaddresses[0]
	I1014 08:44:56.041063   15224 main.go:141] libmachine: [stdout =====>] : 172.20.106.123
	
	I1014 08:44:56.041063   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:44:56.047492   15224 main.go:141] libmachine: Using SSH client type: native
	I1014 08:44:56.048427   15224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.106.123 22 <nil> <nil>}
	I1014 08:44:56.048460   15224 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 08:44:56.177780   15224 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1014 08:44:56.177921   15224 buildroot.go:166] provisioning hostname "multinode-671000"
	I1014 08:44:56.177921   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000 ).state
	I1014 08:44:58.222838   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:44:58.222838   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:44:58.222838   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000 ).networkadapters[0]).ipaddresses[0]
	I1014 08:45:00.709338   15224 main.go:141] libmachine: [stdout =====>] : 172.20.106.123
	
	I1014 08:45:00.709338   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:45:00.716168   15224 main.go:141] libmachine: Using SSH client type: native
	I1014 08:45:00.716859   15224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.106.123 22 <nil> <nil>}
	I1014 08:45:00.716859   15224 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-671000 && echo "multinode-671000" | sudo tee /etc/hostname
	I1014 08:45:00.863452   15224 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-671000
	
	I1014 08:45:00.863530   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000 ).state
	I1014 08:45:02.987244   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:45:02.987365   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:45:02.987487   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000 ).networkadapters[0]).ipaddresses[0]
	I1014 08:45:05.466484   15224 main.go:141] libmachine: [stdout =====>] : 172.20.106.123
	
	I1014 08:45:05.466661   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:45:05.472466   15224 main.go:141] libmachine: Using SSH client type: native
	I1014 08:45:05.473098   15224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.106.123 22 <nil> <nil>}
	I1014 08:45:05.473192   15224 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-671000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-671000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-671000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 08:45:05.623017   15224 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 08:45:05.623107   15224 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I1014 08:45:05.623107   15224 buildroot.go:174] setting up certificates
	I1014 08:45:05.623229   15224 provision.go:84] configureAuth start
	I1014 08:45:05.623301   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000 ).state
	I1014 08:45:07.693415   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:45:07.694278   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:45:07.694379   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000 ).networkadapters[0]).ipaddresses[0]
	I1014 08:45:10.221863   15224 main.go:141] libmachine: [stdout =====>] : 172.20.106.123
	
	I1014 08:45:10.221920   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:45:10.221920   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000 ).state
	I1014 08:45:12.270483   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:45:12.270483   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:45:12.270483   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000 ).networkadapters[0]).ipaddresses[0]
	I1014 08:45:14.731822   15224 main.go:141] libmachine: [stdout =====>] : 172.20.106.123
	
	I1014 08:45:14.732454   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:45:14.732454   15224 provision.go:143] copyHostCerts
	I1014 08:45:14.732638   15224 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I1014 08:45:14.732869   15224 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I1014 08:45:14.732869   15224 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I1014 08:45:14.733484   15224 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1014 08:45:14.734974   15224 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I1014 08:45:14.735172   15224 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I1014 08:45:14.735172   15224 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I1014 08:45:14.735172   15224 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1014 08:45:14.736608   15224 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I1014 08:45:14.736608   15224 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I1014 08:45:14.736608   15224 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I1014 08:45:14.737527   15224 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I1014 08:45:14.738625   15224 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-671000 san=[127.0.0.1 172.20.106.123 localhost minikube multinode-671000]
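provision.go signs the server certificate in Go against the minikube CA; the SAN list above is the interesting part. Purely as an illustration of what that certificate amounts to, an equivalent openssl flow would look like this (hypothetical file names; openssl is not what minikube actually runs):

    # Illustration only: minikube signs this cert in Go, not via openssl.
    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
        -subj "/O=jenkins.multinode-671000" -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
        -days 365 -out server.pem \
        -extfile <(echo "subjectAltName=IP:127.0.0.1,IP:172.20.106.123,DNS:localhost,DNS:minikube,DNS:multinode-671000")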
	I1014 08:45:14.822439   15224 provision.go:177] copyRemoteCerts
	I1014 08:45:14.832452   15224 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 08:45:14.833292   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000 ).state
	I1014 08:45:16.858535   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:45:16.858594   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:45:16.858594   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000 ).networkadapters[0]).ipaddresses[0]
	I1014 08:45:19.312599   15224 main.go:141] libmachine: [stdout =====>] : 172.20.106.123
	
	I1014 08:45:19.312671   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:45:19.312744   15224 sshutil.go:53] new ssh client: &{IP:172.20.106.123 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-671000\id_rsa Username:docker}
	I1014 08:45:19.418940   15224 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.5864803s)
	I1014 08:45:19.419024   15224 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1014 08:45:19.421274   15224 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I1014 08:45:19.467514   15224 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1014 08:45:19.467514   15224 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1014 08:45:19.512423   15224 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1014 08:45:19.513692   15224 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1014 08:45:19.558955   15224 provision.go:87] duration metric: took 13.9356349s to configureAuth
	I1014 08:45:19.559019   15224 buildroot.go:189] setting minikube options for container-runtime
	I1014 08:45:19.559648   15224 config.go:182] Loaded profile config "multinode-671000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 08:45:19.559648   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000 ).state
	I1014 08:45:21.637227   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:45:21.638017   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:45:21.638080   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000 ).networkadapters[0]).ipaddresses[0]
	I1014 08:45:24.073085   15224 main.go:141] libmachine: [stdout =====>] : 172.20.106.123
	
	I1014 08:45:24.073890   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:45:24.084887   15224 main.go:141] libmachine: Using SSH client type: native
	I1014 08:45:24.085628   15224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.106.123 22 <nil> <nil>}
	I1014 08:45:24.085628   15224 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1014 08:45:24.216534   15224 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1014 08:45:24.216643   15224 buildroot.go:70] root file system type: tmpfs
	I1014 08:45:24.216959   15224 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1014 08:45:24.217137   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000 ).state
	I1014 08:45:26.234454   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:45:26.234591   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:45:26.234591   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000 ).networkadapters[0]).ipaddresses[0]
	I1014 08:45:28.733290   15224 main.go:141] libmachine: [stdout =====>] : 172.20.106.123
	
	I1014 08:45:28.733290   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:45:28.739195   15224 main.go:141] libmachine: Using SSH client type: native
	I1014 08:45:28.740129   15224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.106.123 22 <nil> <nil>}
	I1014 08:45:28.740206   15224 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1014 08:45:28.895049   15224 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1014 08:45:28.895170   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000 ).state
	I1014 08:45:30.970482   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:45:30.971402   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:45:30.971551   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000 ).networkadapters[0]).ipaddresses[0]
	I1014 08:45:33.392031   15224 main.go:141] libmachine: [stdout =====>] : 172.20.106.123
	
	I1014 08:45:33.392353   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:45:33.399014   15224 main.go:141] libmachine: Using SSH client type: native
	I1014 08:45:33.399224   15224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.106.123 22 <nil> <nil>}
	I1014 08:45:33.399224   15224 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1014 08:45:35.856287   15224 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1014 08:45:35.856287   15224 machine.go:96] duration metric: took 44.3606533s to provisionDockerMachine
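The diff-or-replace one-liner at 08:45:33 is an update-if-changed install: docker.service is replaced and the daemon restarted only when the new unit differs (here the unit did not exist yet, hence the "can't stat" message and the fresh symlink). Unrolled, with the same paths and commands:

    # Unrolled form of the SSH one-liner at 08:45:33; semantics unchanged.
    if ! sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new; then
        sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
        sudo systemctl -f daemon-reload &&
            sudo systemctl -f enable docker &&
            sudo systemctl -f restart docker
    fi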
	I1014 08:45:35.856287   15224 start.go:293] postStartSetup for "multinode-671000" (driver="hyperv")
	I1014 08:45:35.856287   15224 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 08:45:35.866878   15224 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 08:45:35.866878   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000 ).state
	I1014 08:45:37.902871   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:45:37.902871   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:45:37.903376   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000 ).networkadapters[0]).ipaddresses[0]
	I1014 08:45:40.389463   15224 main.go:141] libmachine: [stdout =====>] : 172.20.106.123
	
	I1014 08:45:40.389539   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:45:40.389539   15224 sshutil.go:53] new ssh client: &{IP:172.20.106.123 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-671000\id_rsa Username:docker}
	I1014 08:45:40.498571   15224 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.631685s)
	I1014 08:45:40.512486   15224 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 08:45:40.520099   15224 command_runner.go:130] > NAME=Buildroot
	I1014 08:45:40.520099   15224 command_runner.go:130] > VERSION=2023.02.9-dirty
	I1014 08:45:40.520099   15224 command_runner.go:130] > ID=buildroot
	I1014 08:45:40.520099   15224 command_runner.go:130] > VERSION_ID=2023.02.9
	I1014 08:45:40.520200   15224 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I1014 08:45:40.520478   15224 info.go:137] Remote host: Buildroot 2023.02.9
	I1014 08:45:40.520550   15224 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I1014 08:45:40.521350   15224 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I1014 08:45:40.521914   15224 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem -> 9362.pem in /etc/ssl/certs
	I1014 08:45:40.521914   15224 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem -> /etc/ssl/certs/9362.pem
	I1014 08:45:40.533476   15224 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 08:45:40.553303   15224 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem --> /etc/ssl/certs/9362.pem (1708 bytes)
	I1014 08:45:40.600329   15224 start.go:296] duration metric: took 4.7440338s for postStartSetup
	I1014 08:45:40.600329   15224 fix.go:56] duration metric: took 1m27.0421262s for fixHost
	I1014 08:45:40.600329   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000 ).state
	I1014 08:45:42.636618   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:45:42.636671   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:45:42.636714   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000 ).networkadapters[0]).ipaddresses[0]
	I1014 08:45:45.078391   15224 main.go:141] libmachine: [stdout =====>] : 172.20.106.123
	
	I1014 08:45:45.079558   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:45:45.084901   15224 main.go:141] libmachine: Using SSH client type: native
	I1014 08:45:45.085524   15224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.106.123 22 <nil> <nil>}
	I1014 08:45:45.085524   15224 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1014 08:45:45.218652   15224 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728920745.219040960
	
	I1014 08:45:45.218652   15224 fix.go:216] guest clock: 1728920745.219040960
	I1014 08:45:45.218652   15224 fix.go:229] Guest: 2024-10-14 08:45:45.21904096 -0700 PDT Remote: 2024-10-14 08:45:40.6003296 -0700 PDT m=+93.303151401 (delta=4.61871136s)
	I1014 08:45:45.218949   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000 ).state
	I1014 08:45:47.298917   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:45:47.298917   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:45:47.299813   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000 ).networkadapters[0]).ipaddresses[0]
	I1014 08:45:49.728125   15224 main.go:141] libmachine: [stdout =====>] : 172.20.106.123
	
	I1014 08:45:49.728826   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:45:49.734542   15224 main.go:141] libmachine: Using SSH client type: native
	I1014 08:45:49.734623   15224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.106.123 22 <nil> <nil>}
	I1014 08:45:49.734623   15224 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1728920745
	I1014 08:45:49.881262   15224 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Oct 14 15:45:45 UTC 2024
	
	I1014 08:45:49.881352   15224 fix.go:236] clock set: Mon Oct 14 15:45:45 UTC 2024
	 (err=<nil>)
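fix.go reads the guest clock over SSH, compares it with the host clock (delta=4.61871136s above), and resets the guest when the skew is too large. A minimal sketch of that check (the 2-second threshold is an assumption; minikube's actual cutoff is not visible in this log):

    # Hedged sketch of the guest-clock fix logged above.
    host_key=id_rsa                      # per-machine key, as in sshutil above
    guest=$(ssh -i "$host_key" docker@172.20.106.123 'date +%s')
    host=$(date +%s)
    delta=$((guest - host))
    if [ "${delta#-}" -gt 2 ]; then      # ${delta#-} strips the sign: absolute value
        ssh -i "$host_key" docker@172.20.106.123 "sudo date -s @$host"
    fi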
	I1014 08:45:49.881352   15224 start.go:83] releasing machines lock for "multinode-671000", held for 1m36.323176s
	I1014 08:45:49.881526   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000 ).state
	I1014 08:45:51.958259   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:45:51.958682   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:45:51.958682   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000 ).networkadapters[0]).ipaddresses[0]
	I1014 08:45:54.416595   15224 main.go:141] libmachine: [stdout =====>] : 172.20.106.123
	
	I1014 08:45:54.416595   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:45:54.421939   15224 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1014 08:45:54.422094   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000 ).state
	I1014 08:45:54.431567   15224 ssh_runner.go:195] Run: cat /version.json
	I1014 08:45:54.431567   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000 ).state
	I1014 08:45:56.596858   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:45:56.596858   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:45:56.596858   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:45:56.596858   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:45:56.597666   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000 ).networkadapters[0]).ipaddresses[0]
	I1014 08:45:56.597773   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000 ).networkadapters[0]).ipaddresses[0]
	I1014 08:45:59.164179   15224 main.go:141] libmachine: [stdout =====>] : 172.20.106.123
	
	I1014 08:45:59.164179   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:45:59.164179   15224 sshutil.go:53] new ssh client: &{IP:172.20.106.123 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-671000\id_rsa Username:docker}
	I1014 08:45:59.181617   15224 main.go:141] libmachine: [stdout =====>] : 172.20.106.123
	
	I1014 08:45:59.181940   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:45:59.182091   15224 sshutil.go:53] new ssh client: &{IP:172.20.106.123 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-671000\id_rsa Username:docker}
	I1014 08:45:59.250252   15224 command_runner.go:130] > {"iso_version": "v1.34.0-1728382514-19774", "kicbase_version": "v0.0.45-1728063813-19756", "minikube_version": "v1.34.0", "commit": "cf9f11c2b0369efc07a929c4a1fdb2b4b3c62ee9"}
	I1014 08:45:59.250402   15224 ssh_runner.go:235] Completed: cat /version.json: (4.8188261s)
	I1014 08:45:59.264323   15224 ssh_runner.go:195] Run: systemctl --version
	I1014 08:45:59.268396   15224 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I1014 08:45:59.268396   15224 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.8464155s)
	W1014 08:45:59.268396   15224 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1014 08:45:59.272830   15224 command_runner.go:130] > systemd 252 (252)
	I1014 08:45:59.272830   15224 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I1014 08:45:59.284720   15224 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1014 08:45:59.292625   15224 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1014 08:45:59.293751   15224 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 08:45:59.304084   15224 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 08:45:59.331817   15224 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1014 08:45:59.331817   15224 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
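The find one-liner above prints and renames any bridge/podman CNI configs (here 87-podman-bridge.conflist) with a .mk_disabled suffix so they stop shadowing the CNI config minikube manages. Unrolled:

    # Unrolled form of the find one-liner above.
    for f in /etc/cni/net.d/*bridge* /etc/cni/net.d/*podman*; do
        case "$f" in *.mk_disabled) continue ;; esac
        [ -f "$f" ] && sudo mv "$f" "$f.mk_disabled"
    done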
	I1014 08:45:59.331975   15224 start.go:495] detecting cgroup driver to use...
	I1014 08:45:59.332269   15224 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 08:45:59.368515   15224 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	W1014 08:45:59.375133   15224 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W1014 08:45:59.375133   15224 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1014 08:45:59.380886   15224 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1014 08:45:59.411692   15224 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1014 08:45:59.430899   15224 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1014 08:45:59.441646   15224 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1014 08:45:59.470900   15224 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1014 08:45:59.504488   15224 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1014 08:45:59.533997   15224 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1014 08:45:59.565330   15224 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 08:45:59.598642   15224 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1014 08:45:59.629725   15224 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1014 08:45:59.657570   15224 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1014 08:45:59.688012   15224 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 08:45:59.705351   15224 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1014 08:45:59.705351   15224 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1014 08:45:59.715896   15224 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1014 08:45:59.748369   15224 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
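The three commands above are a check-then-enable sequence for bridge netfilter: the sysctl probe fails with status 255 because br_netfilter is not loaded yet, so minikube loads the module and then turns on IPv4 forwarding. The same sequence, unrolled:

    # Unrolled form of the netfilter setup above.
    if ! sudo sysctl net.bridge.bridge-nf-call-iptables >/dev/null 2>&1; then
        sudo modprobe br_netfilter
    fi
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"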
	I1014 08:45:59.773568   15224 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 08:45:59.965755   15224 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1014 08:46:00.003898   15224 start.go:495] detecting cgroup driver to use...
	I1014 08:46:00.015390   15224 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1014 08:46:00.047005   15224 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1014 08:46:00.047071   15224 command_runner.go:130] > [Unit]
	I1014 08:46:00.047071   15224 command_runner.go:130] > Description=Docker Application Container Engine
	I1014 08:46:00.047071   15224 command_runner.go:130] > Documentation=https://docs.docker.com
	I1014 08:46:00.047071   15224 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I1014 08:46:00.047071   15224 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I1014 08:46:00.047071   15224 command_runner.go:130] > StartLimitBurst=3
	I1014 08:46:00.047156   15224 command_runner.go:130] > StartLimitIntervalSec=60
	I1014 08:46:00.047156   15224 command_runner.go:130] > [Service]
	I1014 08:46:00.047156   15224 command_runner.go:130] > Type=notify
	I1014 08:46:00.047156   15224 command_runner.go:130] > Restart=on-failure
	I1014 08:46:00.047156   15224 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1014 08:46:00.047156   15224 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1014 08:46:00.047241   15224 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1014 08:46:00.047241   15224 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1014 08:46:00.047241   15224 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1014 08:46:00.047241   15224 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1014 08:46:00.047241   15224 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1014 08:46:00.047351   15224 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1014 08:46:00.047415   15224 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1014 08:46:00.047415   15224 command_runner.go:130] > ExecStart=
	I1014 08:46:00.047467   15224 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I1014 08:46:00.047545   15224 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1014 08:46:00.047583   15224 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1014 08:46:00.047583   15224 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1014 08:46:00.047583   15224 command_runner.go:130] > LimitNOFILE=infinity
	I1014 08:46:00.047583   15224 command_runner.go:130] > LimitNPROC=infinity
	I1014 08:46:00.047583   15224 command_runner.go:130] > LimitCORE=infinity
	I1014 08:46:00.047583   15224 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1014 08:46:00.047583   15224 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1014 08:46:00.047687   15224 command_runner.go:130] > TasksMax=infinity
	I1014 08:46:00.047717   15224 command_runner.go:130] > TimeoutStartSec=0
	I1014 08:46:00.047757   15224 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1014 08:46:00.047757   15224 command_runner.go:130] > Delegate=yes
	I1014 08:46:00.047757   15224 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1014 08:46:00.047757   15224 command_runner.go:130] > KillMode=process
	I1014 08:46:00.047757   15224 command_runner.go:130] > [Install]
	I1014 08:46:00.047844   15224 command_runner.go:130] > WantedBy=multi-user.target
	I1014 08:46:00.060088   15224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 08:46:00.091459   15224 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 08:46:00.136449   15224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 08:46:00.169625   15224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1014 08:46:00.202233   15224 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1014 08:46:00.263360   15224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1014 08:46:00.286997   15224 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 08:46:00.317875   15224 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1014 08:46:00.327743   15224 ssh_runner.go:195] Run: which cri-dockerd
	I1014 08:46:00.333762   15224 command_runner.go:130] > /usr/bin/cri-dockerd
	I1014 08:46:00.345178   15224 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1014 08:46:00.365900   15224 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1014 08:46:00.403545   15224 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1014 08:46:00.603475   15224 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1014 08:46:00.793419   15224 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1014 08:46:00.793941   15224 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1014 08:46:00.836113   15224 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 08:46:01.022899   15224 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1014 08:46:03.696947   15224 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6739455s)
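The 130-byte daemon.json pushed at 08:46:00 is not echoed in the log; based on docker.go:574's "configuring docker to use cgroupfs" message, a representative payload would be the following (the exact keys are an assumption, not the literal bytes):

    # Representative /etc/docker/daemon.json; exact contents are not shown in this log.
    echo '{ "exec-opts": ["native.cgroupdriver=cgroupfs"] }' | sudo tee /etc/docker/daemon.json
    sudo systemctl daemon-reload && sudo systemctl restart docker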
	I1014 08:46:03.710831   15224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1014 08:46:03.744741   15224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1014 08:46:03.778138   15224 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1014 08:46:03.967436   15224 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1014 08:46:04.177295   15224 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 08:46:04.380206   15224 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1014 08:46:04.426934   15224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1014 08:46:04.463406   15224 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 08:46:04.662791   15224 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1014 08:46:04.769183   15224 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1014 08:46:04.779438   15224 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1014 08:46:04.790442   15224 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1014 08:46:04.790537   15224 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1014 08:46:04.790537   15224 command_runner.go:130] > Device: 0,22	Inode: 845         Links: 1
	I1014 08:46:04.790537   15224 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I1014 08:46:04.790623   15224 command_runner.go:130] > Access: 2024-10-14 15:46:04.687166886 +0000
	I1014 08:46:04.790623   15224 command_runner.go:130] > Modify: 2024-10-14 15:46:04.687166886 +0000
	I1014 08:46:04.790623   15224 command_runner.go:130] > Change: 2024-10-14 15:46:04.692166888 +0000
	I1014 08:46:04.790623   15224 command_runner.go:130] >  Birth: -
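start.go:542's "Will wait 60s" is a poll on the socket path; the stat above succeeding on the first try ends it immediately. The wait amounts to (poll interval assumed):

    # Sketch of the 60-second socket wait logged above.
    for _ in $(seq 1 60); do
        stat /var/run/cri-dockerd.sock >/dev/null 2>&1 && break
        sleep 1
    done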
	I1014 08:46:04.790623   15224 start.go:563] Will wait 60s for crictl version
	I1014 08:46:04.805088   15224 ssh_runner.go:195] Run: which crictl
	I1014 08:46:04.812980   15224 command_runner.go:130] > /usr/bin/crictl
	I1014 08:46:04.827838   15224 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1014 08:46:04.885551   15224 command_runner.go:130] > Version:  0.1.0
	I1014 08:46:04.885618   15224 command_runner.go:130] > RuntimeName:  docker
	I1014 08:46:04.885729   15224 command_runner.go:130] > RuntimeVersion:  27.3.1
	I1014 08:46:04.885729   15224 command_runner.go:130] > RuntimeApiVersion:  v1
	I1014 08:46:04.885793   15224 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I1014 08:46:04.893380   15224 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1014 08:46:04.924622   15224 command_runner.go:130] > 27.3.1
	I1014 08:46:04.936682   15224 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1014 08:46:04.964825   15224 command_runner.go:130] > 27.3.1
	I1014 08:46:04.970480   15224 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	I1014 08:46:04.970606   15224 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I1014 08:46:04.975359   15224 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I1014 08:46:04.975359   15224 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I1014 08:46:04.975359   15224 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I1014 08:46:04.975663   15224 ip.go:211] Found interface: {Index:13 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:08:65:4d Flags:up|broadcast|multicast|running}
	I1014 08:46:04.978430   15224 ip.go:214] interface addr: fe80::b548:ff79:d464:75b8/64
	I1014 08:46:04.978430   15224 ip.go:214] interface addr: 172.20.96.1/20
	I1014 08:46:04.987521   15224 ssh_runner.go:195] Run: grep 172.20.96.1	host.minikube.internal$ /etc/hosts
	I1014 08:46:04.993528   15224 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.20.96.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
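The grep/echo/cp one-liner above makes the hosts entry idempotent: the preceding grep checks whether host.minikube.internal already resolves to 172.20.96.1, and only then is /etc/hosts rewritten by dropping any stale entry and appending the current one. Unrolled:

    # Unrolled form of the /etc/hosts rewrite above (same address and hostname).
    {
        grep -v $'\thost.minikube.internal$' /etc/hosts
        printf '172.20.96.1\thost.minikube.internal\n'
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts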
	I1014 08:46:05.014457   15224 kubeadm.go:883] updating cluster {Name:multinode-671000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-671000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.106.123 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.20.109.137 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.20.102.29 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 08:46:05.014457   15224 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1014 08:46:05.024919   15224 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1014 08:46:05.053890   15224 command_runner.go:130] > kindest/kindnetd:v20241007-36f62932
	I1014 08:46:05.053890   15224 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.31.1
	I1014 08:46:05.053890   15224 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.31.1
	I1014 08:46:05.054023   15224 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.31.1
	I1014 08:46:05.054023   15224 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.31.1
	I1014 08:46:05.054023   15224 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.3
	I1014 08:46:05.054023   15224 command_runner.go:130] > registry.k8s.io/etcd:3.5.15-0
	I1014 08:46:05.054023   15224 command_runner.go:130] > registry.k8s.io/pause:3.10
	I1014 08:46:05.054023   15224 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 08:46:05.054023   15224 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I1014 08:46:05.054187   15224 docker.go:689] Got preloaded images: -- stdout --
	kindest/kindnetd:v20241007-36f62932
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I1014 08:46:05.054210   15224 docker.go:619] Images already preloaded, skipping extraction
	I1014 08:46:05.067803   15224 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1014 08:46:05.095328   15224 command_runner.go:130] > kindest/kindnetd:v20241007-36f62932
	I1014 08:46:05.095417   15224 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.31.1
	I1014 08:46:05.095417   15224 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.31.1
	I1014 08:46:05.095417   15224 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.31.1
	I1014 08:46:05.095417   15224 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.31.1
	I1014 08:46:05.095491   15224 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.3
	I1014 08:46:05.095491   15224 command_runner.go:130] > registry.k8s.io/etcd:3.5.15-0
	I1014 08:46:05.095491   15224 command_runner.go:130] > registry.k8s.io/pause:3.10
	I1014 08:46:05.095491   15224 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 08:46:05.095491   15224 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I1014 08:46:05.095611   15224 docker.go:689] Got preloaded images: -- stdout --
	kindest/kindnetd:v20241007-36f62932
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I1014 08:46:05.095683   15224 cache_images.go:84] Images are preloaded, skipping loading
	I1014 08:46:05.095753   15224 kubeadm.go:934] updating node { 172.20.106.123 8443 v1.31.1 docker true true} ...
	I1014 08:46:05.096021   15224 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-671000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.20.106.123
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-671000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 08:46:05.105582   15224 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1014 08:46:05.173403   15224 command_runner.go:130] > cgroupfs
	I1014 08:46:05.173658   15224 cni.go:84] Creating CNI manager for ""
	I1014 08:46:05.173728   15224 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1014 08:46:05.173853   15224 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1014 08:46:05.173929   15224 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.20.106.123 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-671000 NodeName:multinode-671000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.20.106.123"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.20.106.123 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 08:46:05.174405   15224 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.20.106.123
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-671000"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "172.20.106.123"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.20.106.123"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
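The block above is one four-document YAML stream: InitConfiguration (node-local API endpoint and kubelet args), ClusterConfiguration (component extraArgs, cert SANs, etcd), KubeletConfiguration, and KubeProxyConfiguration. A quick way to sanity-check such a stream before handing it to kubeadm is the built-in validator; a minimal sketch, assuming a recent kubeadm that ships "kubeadm config validate" and using the on-node paths from the log:

    # Validate the rendered multi-document config without touching the cluster.
    sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" \
      kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml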
	I1014 08:46:05.187718   15224 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1014 08:46:05.208520   15224 command_runner.go:130] > kubeadm
	I1014 08:46:05.209540   15224 command_runner.go:130] > kubectl
	I1014 08:46:05.209540   15224 command_runner.go:130] > kubelet
	I1014 08:46:05.209540   15224 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 08:46:05.221947   15224 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 08:46:05.238933   15224 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1014 08:46:05.269892   15224 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 08:46:05.304444   15224 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2300 bytes)
	I1014 08:46:05.350507   15224 ssh_runner.go:195] Run: grep 172.20.106.123	control-plane.minikube.internal$ /etc/hosts
	I1014 08:46:05.357197   15224 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.20.106.123	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
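The hosts-file rewrite above is an idempotent replace-then-append: filter out any existing control-plane.minikube.internal line, append the current mapping, then copy the temp file over /etc/hosts. The same pattern as a standalone sketch (IP and hostname taken from the log; the temp-file name is illustrative):

    #!/bin/bash
    # Pin a hostname to an IP in /etc/hosts; safe to re-run.
    ip="172.20.106.123"
    host="control-plane.minikube.internal"
    { grep -v $'\t'"$host"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$host"; } > "/tmp/hosts.$$"
    sudo cp "/tmp/hosts.$$" /etc/hosts && rm -f "/tmp/hosts.$$"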
	I1014 08:46:05.395114   15224 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 08:46:05.594775   15224 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 08:46:05.622076   15224 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000 for IP: 172.20.106.123
	I1014 08:46:05.622269   15224 certs.go:194] generating shared ca certs ...
	I1014 08:46:05.622335   15224 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 08:46:05.623386   15224 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I1014 08:46:05.623972   15224 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I1014 08:46:05.623972   15224 certs.go:256] generating profile certs ...
	I1014 08:46:05.623972   15224 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000\client.key
	I1014 08:46:05.625153   15224 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000\apiserver.key.e9b19ac9
	I1014 08:46:05.625279   15224 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000\apiserver.crt.e9b19ac9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.20.106.123]
	I1014 08:46:05.684226   15224 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000\apiserver.crt.e9b19ac9 ...
	I1014 08:46:05.684226   15224 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000\apiserver.crt.e9b19ac9: {Name:mk3795177dce49c783f9ee27d09e16b869d515a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 08:46:05.686235   15224 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000\apiserver.key.e9b19ac9 ...
	I1014 08:46:05.686235   15224 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000\apiserver.key.e9b19ac9: {Name:mkf4893f04bf939f2cb6f963f84b6c5956474043 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 08:46:05.686920   15224 certs.go:381] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000\apiserver.crt.e9b19ac9 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000\apiserver.crt
	I1014 08:46:05.704929   15224 certs.go:385] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000\apiserver.key.e9b19ac9 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000\apiserver.key
	I1014 08:46:05.706523   15224 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000\proxy-client.key
	I1014 08:46:05.706644   15224 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I1014 08:46:05.706899   15224 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I1014 08:46:05.707047   15224 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1014 08:46:05.707047   15224 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1014 08:46:05.707047   15224 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1014 08:46:05.707588   15224 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1014 08:46:05.707828   15224 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1014 08:46:05.707989   15224 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1014 08:46:05.708208   15224 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\936.pem (1338 bytes)
	W1014 08:46:05.709086   15224 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\936_empty.pem, impossibly tiny 0 bytes
	I1014 08:46:05.709214   15224 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1014 08:46:05.709214   15224 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I1014 08:46:05.710051   15224 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1014 08:46:05.710359   15224 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I1014 08:46:05.710530   15224 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem (1708 bytes)
	I1014 08:46:05.711260   15224 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem -> /usr/share/ca-certificates/9362.pem
	I1014 08:46:05.711299   15224 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1014 08:46:05.711299   15224 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\936.pem -> /usr/share/ca-certificates/936.pem
	I1014 08:46:05.712863   15224 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 08:46:05.771770   15224 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1014 08:46:05.822670   15224 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 08:46:05.877047   15224 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 08:46:05.938655   15224 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1014 08:46:05.991183   15224 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1014 08:46:06.043812   15224 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 08:46:06.095230   15224 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1014 08:46:06.145209   15224 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem --> /usr/share/ca-certificates/9362.pem (1708 bytes)
	I1014 08:46:06.192720   15224 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 08:46:06.236591   15224 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\936.pem --> /usr/share/ca-certificates/936.pem (1338 bytes)
	I1014 08:46:06.281178   15224 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 08:46:06.323802   15224 ssh_runner.go:195] Run: openssl version
	I1014 08:46:06.332179   15224 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I1014 08:46:06.344790   15224 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9362.pem && ln -fs /usr/share/ca-certificates/9362.pem /etc/ssl/certs/9362.pem"
	I1014 08:46:06.379138   15224 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9362.pem
	I1014 08:46:06.386330   15224 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 14 14:00 /usr/share/ca-certificates/9362.pem
	I1014 08:46:06.386330   15224 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 14:00 /usr/share/ca-certificates/9362.pem
	I1014 08:46:06.397806   15224 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9362.pem
	I1014 08:46:06.406914   15224 command_runner.go:130] > 3ec20f2e
	I1014 08:46:06.421441   15224 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9362.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 08:46:06.452745   15224 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 08:46:06.487518   15224 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 08:46:06.495302   15224 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 14 13:44 /usr/share/ca-certificates/minikubeCA.pem
	I1014 08:46:06.495302   15224 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 13:44 /usr/share/ca-certificates/minikubeCA.pem
	I1014 08:46:06.505374   15224 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 08:46:06.514465   15224 command_runner.go:130] > b5213941
	I1014 08:46:06.526108   15224 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 08:46:06.554095   15224 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/936.pem && ln -fs /usr/share/ca-certificates/936.pem /etc/ssl/certs/936.pem"
	I1014 08:46:06.585235   15224 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/936.pem
	I1014 08:46:06.591189   15224 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 14 14:00 /usr/share/ca-certificates/936.pem
	I1014 08:46:06.591396   15224 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 14:00 /usr/share/ca-certificates/936.pem
	I1014 08:46:06.605730   15224 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/936.pem
	I1014 08:46:06.614146   15224 command_runner.go:130] > 51391683
	I1014 08:46:06.624214   15224 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/936.pem /etc/ssl/certs/51391683.0"
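Each certificate above goes through the same three steps: copy the PEM into /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink it into /etc/ssl/certs as <hash>.0, which is how OpenSSL's hashed-directory lookup finds CAs. The pattern in isolation (paths and hash from the log):

    # Register a CA with the OpenSSL trust store by subject hash.
    pem=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$pem")   # prints e.g. b5213941
    sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"  # OpenSSL resolves CAs as <hash>.0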
	I1014 08:46:06.653769   15224 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 08:46:06.662087   15224 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 08:46:06.662180   15224 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1014 08:46:06.662180   15224 command_runner.go:130] > Device: 8,1	Inode: 5241127     Links: 1
	I1014 08:46:06.662180   15224 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1014 08:46:06.662180   15224 command_runner.go:130] > Access: 2024-10-14 15:22:28.423226154 +0000
	I1014 08:46:06.662180   15224 command_runner.go:130] > Modify: 2024-10-14 15:22:28.423226154 +0000
	I1014 08:46:06.662180   15224 command_runner.go:130] > Change: 2024-10-14 15:22:28.423226154 +0000
	I1014 08:46:06.662180   15224 command_runner.go:130] >  Birth: 2024-10-14 15:22:28.423226154 +0000
	I1014 08:46:06.673532   15224 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1014 08:46:06.685770   15224 command_runner.go:130] > Certificate will not expire
	I1014 08:46:06.696628   15224 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1014 08:46:06.705667   15224 command_runner.go:130] > Certificate will not expire
	I1014 08:46:06.716957   15224 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1014 08:46:06.727193   15224 command_runner.go:130] > Certificate will not expire
	I1014 08:46:06.740400   15224 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1014 08:46:06.750259   15224 command_runner.go:130] > Certificate will not expire
	I1014 08:46:06.762908   15224 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1014 08:46:06.773153   15224 command_runner.go:130] > Certificate will not expire
	I1014 08:46:06.782909   15224 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1014 08:46:06.792933   15224 command_runner.go:130] > Certificate will not expire
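The -checkend 86400 calls above ask openssl whether each certificate will still be valid 86400 seconds (24 hours) from now; exit status 0 plus the line "Certificate will not expire" means no renewal is needed. The same sweep as a loop over the certs checked in the log:

    # Flag any control-plane certificate that expires within 24 hours.
    for crt in /var/lib/minikube/certs/apiserver-etcd-client.crt \
               /var/lib/minikube/certs/apiserver-kubelet-client.crt \
               /var/lib/minikube/certs/front-proxy-client.crt \
               /var/lib/minikube/certs/etcd/server.crt \
               /var/lib/minikube/certs/etcd/peer.crt \
               /var/lib/minikube/certs/etcd/healthcheck-client.crt; do
      openssl x509 -noout -in "$crt" -checkend 86400 || echo "needs renewal: $crt"
    done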
	I1014 08:46:06.793304   15224 kubeadm.go:392] StartCluster: {Name:multinode-671000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-671000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.106.123 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.20.109.137 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.20.102.29 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 08:46:06.802728   15224 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1014 08:46:06.838744   15224 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 08:46:06.861625   15224 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1014 08:46:06.861625   15224 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1014 08:46:06.861625   15224 command_runner.go:130] > /var/lib/minikube/etcd:
	I1014 08:46:06.861625   15224 command_runner.go:130] > member
	I1014 08:46:06.861766   15224 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1014 08:46:06.861766   15224 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1014 08:46:06.874073   15224 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1014 08:46:06.895878   15224 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1014 08:46:06.897075   15224 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-671000" does not appear in C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1014 08:46:06.897375   15224 kubeconfig.go:62] C:\Users\jenkins.minikube1\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "multinode-671000" cluster setting kubeconfig missing "multinode-671000" context setting]
	I1014 08:46:06.898017   15224 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 08:46:06.913804   15224 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1014 08:46:06.915160   15224 kapi.go:59] client config for multinode-671000: &rest.Config{Host:"https://172.20.106.123:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-671000/client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-671000/client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2926ae0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1014 08:46:06.916608   15224 cert_rotation.go:140] Starting client certificate rotation controller
	I1014 08:46:06.927856   15224 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1014 08:46:06.948546   15224 command_runner.go:130] > --- /var/tmp/minikube/kubeadm.yaml
	I1014 08:46:06.948604   15224 command_runner.go:130] > +++ /var/tmp/minikube/kubeadm.yaml.new
	I1014 08:46:06.948604   15224 command_runner.go:130] > @@ -1,7 +1,7 @@
	I1014 08:46:06.948604   15224 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta4
	I1014 08:46:06.948604   15224 command_runner.go:130] >  kind: InitConfiguration
	I1014 08:46:06.948604   15224 command_runner.go:130] >  localAPIEndpoint:
	I1014 08:46:06.948604   15224 command_runner.go:130] > -  advertiseAddress: 172.20.100.167
	I1014 08:46:06.948604   15224 command_runner.go:130] > +  advertiseAddress: 172.20.106.123
	I1014 08:46:06.948604   15224 command_runner.go:130] >    bindPort: 8443
	I1014 08:46:06.948604   15224 command_runner.go:130] >  bootstrapTokens:
	I1014 08:46:06.948604   15224 command_runner.go:130] >    - groups:
	I1014 08:46:06.948604   15224 command_runner.go:130] > @@ -15,13 +15,13 @@
	I1014 08:46:06.948604   15224 command_runner.go:130] >    name: "multinode-671000"
	I1014 08:46:06.948604   15224 command_runner.go:130] >    kubeletExtraArgs:
	I1014 08:46:06.948604   15224 command_runner.go:130] >      - name: "node-ip"
	I1014 08:46:06.948604   15224 command_runner.go:130] > -      value: "172.20.100.167"
	I1014 08:46:06.948604   15224 command_runner.go:130] > +      value: "172.20.106.123"
	I1014 08:46:06.948604   15224 command_runner.go:130] >    taints: []
	I1014 08:46:06.948604   15224 command_runner.go:130] >  ---
	I1014 08:46:06.948604   15224 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta4
	I1014 08:46:06.948604   15224 command_runner.go:130] >  kind: ClusterConfiguration
	I1014 08:46:06.948604   15224 command_runner.go:130] >  apiServer:
	I1014 08:46:06.948604   15224 command_runner.go:130] > -  certSANs: ["127.0.0.1", "localhost", "172.20.100.167"]
	I1014 08:46:06.948604   15224 command_runner.go:130] > +  certSANs: ["127.0.0.1", "localhost", "172.20.106.123"]
	I1014 08:46:06.948604   15224 command_runner.go:130] >    extraArgs:
	I1014 08:46:06.948604   15224 command_runner.go:130] >      - name: "enable-admission-plugins"
	I1014 08:46:06.948604   15224 command_runner.go:130] >        value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	I1014 08:46:06.948604   15224 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,7 +1,7 @@
	 apiVersion: kubeadm.k8s.io/v1beta4
	 kind: InitConfiguration
	 localAPIEndpoint:
	-  advertiseAddress: 172.20.100.167
	+  advertiseAddress: 172.20.106.123
	   bindPort: 8443
	 bootstrapTokens:
	   - groups:
	@@ -15,13 +15,13 @@
	   name: "multinode-671000"
	   kubeletExtraArgs:
	     - name: "node-ip"
	-      value: "172.20.100.167"
	+      value: "172.20.106.123"
	   taints: []
	 ---
	 apiVersion: kubeadm.k8s.io/v1beta4
	 kind: ClusterConfiguration
	 apiServer:
	-  certSANs: ["127.0.0.1", "localhost", "172.20.100.167"]
	+  certSANs: ["127.0.0.1", "localhost", "172.20.106.123"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	       value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	
	-- /stdout --
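The drift check above is just diff's exit status: 0 means the deployed /var/tmp/minikube/kubeadm.yaml matches the freshly rendered .new file, non-zero (as here, where the node IP moved from 172.20.100.167 to 172.20.106.123) means the control plane must be reconfigured from the new file. As a standalone sketch:

    # Reconfigure only when the rendered kubeadm config differs from the deployed one.
    if ! sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
      sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
      echo "kubeadm config drift: control plane will be restarted from the new file"
    fi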
	I1014 08:46:06.948604   15224 kubeadm.go:1160] stopping kube-system containers ...
	I1014 08:46:06.957610   15224 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1014 08:46:06.986322   15224 command_runner.go:130] > d9831e9f8ce8
	I1014 08:46:06.986322   15224 command_runner.go:130] > 3d8b7bae48a5
	I1014 08:46:06.986322   15224 command_runner.go:130] > 2f8cc9a218fe
	I1014 08:46:06.986322   15224 command_runner.go:130] > 1863de70f231
	I1014 08:46:06.986322   15224 command_runner.go:130] > fcdf89a3ac8c
	I1014 08:46:06.986322   15224 command_runner.go:130] > ea19428d7036
	I1014 08:46:06.986322   15224 command_runner.go:130] > 7144d8ce208c
	I1014 08:46:06.986322   15224 command_runner.go:130] > 5e48ddcfdf90
	I1014 08:46:06.986322   15224 command_runner.go:130] > 661e75bbf6b4
	I1014 08:46:06.986322   15224 command_runner.go:130] > 712aad669c9f
	I1014 08:46:06.986322   15224 command_runner.go:130] > 1ba3cd8bbd59
	I1014 08:46:06.986322   15224 command_runner.go:130] > 0b5a6e440d7b
	I1014 08:46:06.986322   15224 command_runner.go:130] > bfdde08319e3
	I1014 08:46:06.986322   15224 command_runner.go:130] > 2c6be2bd1889
	I1014 08:46:06.986322   15224 command_runner.go:130] > 2dc78387553f
	I1014 08:46:06.986322   15224 command_runner.go:130] > d5733d27d2f1
	I1014 08:46:06.986322   15224 docker.go:483] Stopping containers: [d9831e9f8ce8 3d8b7bae48a5 2f8cc9a218fe 1863de70f231 fcdf89a3ac8c ea19428d7036 7144d8ce208c 5e48ddcfdf90 661e75bbf6b4 712aad669c9f 1ba3cd8bbd59 0b5a6e440d7b bfdde08319e3 2c6be2bd1889 2dc78387553f d5733d27d2f1]
	I1014 08:46:06.996408   15224 ssh_runner.go:195] Run: docker stop d9831e9f8ce8 3d8b7bae48a5 2f8cc9a218fe 1863de70f231 fcdf89a3ac8c ea19428d7036 7144d8ce208c 5e48ddcfdf90 661e75bbf6b4 712aad669c9f 1ba3cd8bbd59 0b5a6e440d7b bfdde08319e3 2c6be2bd1889 2dc78387553f d5733d27d2f1
	I1014 08:46:07.026833   15224 command_runner.go:130] > d9831e9f8ce8
	I1014 08:46:07.026833   15224 command_runner.go:130] > 3d8b7bae48a5
	I1014 08:46:07.026833   15224 command_runner.go:130] > 2f8cc9a218fe
	I1014 08:46:07.026833   15224 command_runner.go:130] > 1863de70f231
	I1014 08:46:07.026833   15224 command_runner.go:130] > fcdf89a3ac8c
	I1014 08:46:07.026933   15224 command_runner.go:130] > ea19428d7036
	I1014 08:46:07.026933   15224 command_runner.go:130] > 7144d8ce208c
	I1014 08:46:07.026933   15224 command_runner.go:130] > 5e48ddcfdf90
	I1014 08:46:07.026933   15224 command_runner.go:130] > 661e75bbf6b4
	I1014 08:46:07.026933   15224 command_runner.go:130] > 712aad669c9f
	I1014 08:46:07.026933   15224 command_runner.go:130] > 1ba3cd8bbd59
	I1014 08:46:07.027021   15224 command_runner.go:130] > 0b5a6e440d7b
	I1014 08:46:07.027021   15224 command_runner.go:130] > bfdde08319e3
	I1014 08:46:07.027021   15224 command_runner.go:130] > 2c6be2bd1889
	I1014 08:46:07.027021   15224 command_runner.go:130] > 2dc78387553f
	I1014 08:46:07.027021   15224 command_runner.go:130] > d5733d27d2f1
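The stop above works because cri-dockerd names pod containers k8s_<container>_<pod>_<namespace>_<uid>_<attempt>, so the name filter on "_(kube-system)_" selects exactly the control-plane and addon containers. The ps/stop pair collapses into one pipeline:

    # Stop every kube-system pod container in one pass (sketch).
    docker ps -a --filter=name='k8s_.*_(kube-system)_' --format '{{.ID}}' \
      | xargs -r docker stop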
	I1014 08:46:07.037793   15224 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1014 08:46:07.079785   15224 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 08:46:07.098779   15224 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1014 08:46:07.099497   15224 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1014 08:46:07.099497   15224 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1014 08:46:07.099497   15224 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 08:46:07.099729   15224 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 08:46:07.099790   15224 kubeadm.go:157] found existing configuration files:
	
	I1014 08:46:07.109597   15224 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 08:46:07.130667   15224 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 08:46:07.130667   15224 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 08:46:07.141593   15224 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 08:46:07.175279   15224 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 08:46:07.193122   15224 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 08:46:07.193185   15224 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 08:46:07.203545   15224 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 08:46:07.232530   15224 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 08:46:07.251543   15224 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 08:46:07.252513   15224 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 08:46:07.268173   15224 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 08:46:07.297176   15224 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 08:46:07.315189   15224 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 08:46:07.315189   15224 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 08:46:07.325210   15224 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
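The four grep/rm pairs above apply one rule: an /etc/kubernetes/*.conf that does not reference https://control-plane.minikube.internal:8443 is stale and is deleted so kubeadm can regenerate it (here all four were simply absent). The rule as a loop:

    # Remove kubeconfigs that don't target the expected control-plane endpoint.
    endpoint="https://control-plane.minikube.internal:8443"
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "$endpoint" "/etc/kubernetes/$f.conf" || sudo rm -f "/etc/kubernetes/$f.conf"
    done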
	I1014 08:46:07.355828   15224 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 08:46:07.375817   15224 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 08:46:07.643991   15224 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 08:46:07.644109   15224 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1014 08:46:07.644109   15224 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1014 08:46:07.644109   15224 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1014 08:46:07.644109   15224 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I1014 08:46:07.644109   15224 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I1014 08:46:07.644109   15224 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I1014 08:46:07.644109   15224 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I1014 08:46:07.644243   15224 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I1014 08:46:07.644243   15224 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1014 08:46:07.644243   15224 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1014 08:46:07.644243   15224 command_runner.go:130] > [certs] Using the existing "sa" key
	I1014 08:46:07.644353   15224 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 08:46:07.716463   15224 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 08:46:07.872494   15224 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 08:46:08.266961   15224 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 08:46:08.469570   15224 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 08:46:08.690796   15224 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 08:46:09.250375   15224 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 08:46:09.259445   15224 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.6150891s)
	I1014 08:46:09.259445   15224 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1014 08:46:09.608189   15224 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 08:46:09.608251   15224 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 08:46:09.608320   15224 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1014 08:46:09.608320   15224 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 08:46:09.698134   15224 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 08:46:09.698243   15224 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 08:46:09.698243   15224 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 08:46:09.698243   15224 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 08:46:09.698243   15224 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1014 08:46:09.805890   15224 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
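Note that the restart path never runs a full "kubeadm init"; it replays individual phases against the state already on disk: "certs all" reuses every existing key, "kubeconfig all" rewrites the kubeconfig files, "kubelet-start" rewrites the kubelet config and starts it, and "control-plane all" plus "etcd local" drop fresh static-pod manifests into /etc/kubernetes/manifests. The same sequence by hand (PATH prefix as in the log):

    # Replay kubeadm phases for an in-place control-plane restart (sketch).
    cfg=/var/tmp/minikube/kubeadm.yaml
    bin=/var/lib/minikube/binaries/v1.31.1
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
      sudo env PATH="$bin:$PATH" kubeadm init phase $phase --config "$cfg"
    done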
	I1014 08:46:09.805965   15224 api_server.go:52] waiting for apiserver process to appear ...
	I1014 08:46:09.817282   15224 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 08:46:10.319264   15224 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 08:46:10.817293   15224 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 08:46:11.317276   15224 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 08:46:11.816884   15224 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 08:46:11.845322   15224 command_runner.go:130] > 1906
	I1014 08:46:11.845458   15224 api_server.go:72] duration metric: took 2.0394893s to wait for apiserver process to appear ...
	I1014 08:46:11.845458   15224 api_server.go:88] waiting for apiserver healthz status ...
	I1014 08:46:11.845527   15224 api_server.go:253] Checking apiserver healthz at https://172.20.106.123:8443/healthz ...
	I1014 08:46:15.106193   15224 api_server.go:279] https://172.20.106.123:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1014 08:46:15.106276   15224 api_server.go:103] status: https://172.20.106.123:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1014 08:46:15.106276   15224 api_server.go:253] Checking apiserver healthz at https://172.20.106.123:8443/healthz ...
	I1014 08:46:15.196155   15224 api_server.go:279] https://172.20.106.123:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1014 08:46:15.196224   15224 api_server.go:103] status: https://172.20.106.123:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1014 08:46:15.346360   15224 api_server.go:253] Checking apiserver healthz at https://172.20.106.123:8443/healthz ...
	I1014 08:46:15.353345   15224 api_server.go:279] https://172.20.106.123:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1014 08:46:15.353345   15224 api_server.go:103] status: https://172.20.106.123:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1014 08:46:15.845536   15224 api_server.go:253] Checking apiserver healthz at https://172.20.106.123:8443/healthz ...
	I1014 08:46:15.859623   15224 api_server.go:279] https://172.20.106.123:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1014 08:46:15.859997   15224 api_server.go:103] status: https://172.20.106.123:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1014 08:46:16.346035   15224 api_server.go:253] Checking apiserver healthz at https://172.20.106.123:8443/healthz ...
	I1014 08:46:16.357230   15224 api_server.go:279] https://172.20.106.123:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1014 08:46:16.357230   15224 api_server.go:103] status: https://172.20.106.123:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1014 08:46:16.846350   15224 api_server.go:253] Checking apiserver healthz at https://172.20.106.123:8443/healthz ...
	I1014 08:46:16.854581   15224 api_server.go:279] https://172.20.106.123:8443/healthz returned 200:
	ok
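The probe sequence above is the usual apiserver boot signature: first 403, because the anonymous request is rejected until the rbac/bootstrap-roles post-start hook installs the default binding (system:public-info-viewer) that lets unauthenticated clients read /healthz; then 500 while individual post-start hooks are still failing; finally 200/ok. The same probe by hand (-k skips TLS verification, much as the anonymous check presents no client cert):

    # Poll the apiserver health endpoint until it reports ok (sketch).
    until [ "$(curl -ksf https://172.20.106.123:8443/healthz)" = "ok" ]; do
      sleep 0.5
    done
    # '?verbose' breaks the result down per check, like the 500 bodies above.
    curl -ks 'https://172.20.106.123:8443/healthz?verbose'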
	I1014 08:46:16.855051   15224 round_trippers.go:463] GET https://172.20.106.123:8443/version
	I1014 08:46:16.855051   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:16.855051   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:16.855051   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:16.866797   15224 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1014 08:46:16.866797   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:16.866797   15224 round_trippers.go:580]     Content-Length: 263
	I1014 08:46:16.866797   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:16 GMT
	I1014 08:46:16.866797   15224 round_trippers.go:580]     Audit-Id: db161d46-6ae8-4777-adaa-6abd4fa6219b
	I1014 08:46:16.866797   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:16.866797   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:16.866797   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:16.866797   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:16.866797   15224 request.go:1351] Response Body: {
	  "major": "1",
	  "minor": "31",
	  "gitVersion": "v1.31.1",
	  "gitCommit": "948afe5ca072329a73c8e79ed5938717a5cb3d21",
	  "gitTreeState": "clean",
	  "buildDate": "2024-09-11T21:22:08Z",
	  "goVersion": "go1.22.6",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1014 08:46:16.866951   15224 api_server.go:141] control plane version: v1.31.1
	I1014 08:46:16.866951   15224 api_server.go:131] duration metric: took 5.0214846s to wait for apiserver health ...
	I1014 08:46:16.866951   15224 cni.go:84] Creating CNI manager for ""
	I1014 08:46:16.866951   15224 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1014 08:46:16.869379   15224 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1014 08:46:16.884269   15224 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1014 08:46:16.893465   15224 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1014 08:46:16.893506   15224 command_runner.go:130] >   Size: 2785880   	Blocks: 5448       IO Block: 4096   regular file
	I1014 08:46:16.893536   15224 command_runner.go:130] > Device: 0,17	Inode: 3500        Links: 1
	I1014 08:46:16.893536   15224 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1014 08:46:16.893536   15224 command_runner.go:130] > Access: 2024-10-14 15:44:46.012884200 +0000
	I1014 08:46:16.893536   15224 command_runner.go:130] > Modify: 2024-10-08 16:10:48.000000000 +0000
	I1014 08:46:16.893536   15224 command_runner.go:130] > Change: 2024-10-14 08:44:37.118000000 +0000
	I1014 08:46:16.893536   15224 command_runner.go:130] >  Birth: -
	I1014 08:46:16.893536   15224 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1014 08:46:16.893536   15224 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1014 08:46:16.968682   15224 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1014 08:46:18.237229   15224 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1014 08:46:18.237298   15224 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1014 08:46:18.237298   15224 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1014 08:46:18.237335   15224 command_runner.go:130] > daemonset.apps/kindnet configured
	I1014 08:46:18.237335   15224 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.2686513s)
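The kindnet manifest is applied with the cluster's own kubectl binary against the node-local kubeconfig, so it works even while the host-side kubeconfig is still being repaired; the "unchanged"/"configured" lines show the apply is idempotent. By hand, with the same paths:

    # Re-apply the CNI manifest from inside the node (sketch).
    sudo /var/lib/minikube/binaries/v1.31.1/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig \
      apply -f /var/tmp/minikube/cni.yaml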
	I1014 08:46:18.237467   15224 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 08:46:18.237500   15224 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1014 08:46:18.237500   15224 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1014 08:46:18.237500   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods
	I1014 08:46:18.237500   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:18.237500   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:18.237500   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:18.249884   15224 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1014 08:46:18.249884   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:18.249884   15224 round_trippers.go:580]     Audit-Id: 52d22011-ca0d-4991-a7fe-70d33b5c75f4
	I1014 08:46:18.249884   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:18.249884   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:18.249884   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:18.249884   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:18.249884   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:18 GMT
	I1014 08:46:18.251203   15224 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1858"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 91046 chars]
	I1014 08:46:18.258352   15224 system_pods.go:59] 12 kube-system pods found
	I1014 08:46:18.258352   15224 system_pods.go:61] "coredns-7c65d6cfc9-fs9ct" [fd736862-9e3e-4a3d-9a86-08efd2338477] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 08:46:18.258352   15224 system_pods.go:61] "etcd-multinode-671000" [098aece2-cb2c-470a-878a-872417e4387f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1014 08:46:18.258454   15224 system_pods.go:61] "kindnet-5rqxq" [480b1f88-eb32-4638-9834-2be17b8d35ed] Running
	I1014 08:46:18.258454   15224 system_pods.go:61] "kindnet-rgbjf" [445ff184-85e8-4153-a3d0-a0185c4f95de] Running
	I1014 08:46:18.258454   15224 system_pods.go:61] "kindnet-wqrx6" [a508bbf9-7565-4c73-98cb-e9684985c298] Running
	I1014 08:46:18.258454   15224 system_pods.go:61] "kube-apiserver-multinode-671000" [64595feb-e6e8-4e69-a4b7-6459d15e3beb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1014 08:46:18.258527   15224 system_pods.go:61] "kube-controller-manager-multinode-671000" [a5c7bb80-c844-476f-ba47-1cd4e599b92d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1014 08:46:18.258527   15224 system_pods.go:61] "kube-proxy-kbpjf" [004b7f38-fa3b-4c2c-9524-8d5b1ba514e9] Running
	I1014 08:46:18.258527   15224 system_pods.go:61] "kube-proxy-n6txs" [796a44f9-2067-438d-9359-34d5f968c861] Running
	I1014 08:46:18.258527   15224 system_pods.go:61] "kube-proxy-r74dx" [f8d14473-8859-4015-84e9-d00656cc00c9] Running
	I1014 08:46:18.258527   15224 system_pods.go:61] "kube-scheduler-multinode-671000" [97febcab-f54d-4338-ba7c-2dc5e69b77fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1014 08:46:18.258527   15224 system_pods.go:61] "storage-provisioner" [fde8ff75-bc7f-4db4-b098-c3a08b38d205] Running
	I1014 08:46:18.258527   15224 system_pods.go:74] duration metric: took 21.0609ms to wait for pod list to return data ...
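The round_trippers lines above are client-go's wire-level trace of a single PodList GET; system_pods then summarizes the 12 items. A minimal client-go sketch that performs the same list (the kubeconfig path is an assumption; this is not the harness's own helper):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// GET /api/v1/namespaces/kube-system/pods, as traced above.
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		fmt.Printf("%q %s\n", p.Name, p.Status.Phase)
	}
}
```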
	I1014 08:46:18.258527   15224 node_conditions.go:102] verifying NodePressure condition ...
	I1014 08:46:18.258527   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes
	I1014 08:46:18.258527   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:18.258527   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:18.258527   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:18.346618   15224 round_trippers.go:574] Response Status: 200 OK in 88 milliseconds
	I1014 08:46:18.346716   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:18.346766   15224 round_trippers.go:580]     Audit-Id: b2743d1a-1144-484b-bf9a-6b50e65fcd86
	I1014 08:46:18.346766   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:18.346766   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:18.346766   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:18.346766   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:18.346766   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:18 GMT
	I1014 08:46:18.347019   15224 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1866"},"items":[{"metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1816","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 16290 chars]
	I1014 08:46:18.349119   15224 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 08:46:18.349204   15224 node_conditions.go:123] node cpu capacity is 2
	I1014 08:46:18.349243   15224 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 08:46:18.349243   15224 node_conditions.go:123] node cpu capacity is 2
	I1014 08:46:18.349243   15224 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 08:46:18.349243   15224 node_conditions.go:123] node cpu capacity is 2
	I1014 08:46:18.349287   15224 node_conditions.go:105] duration metric: took 90.7154ms to run NodePressure ...
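The node_conditions pass reads capacity straight from the NodeList response: one ephemeral-storage line and one cpu line per node, three nodes here. A sketch of that check as a client-go function (`cs` is a clientset as in the previous sketch; this mirrors the log output, not minikube's exact code):

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// verifyNodeCapacity mirrors the node_conditions output above: print each
// node's ephemeral-storage and cpu capacity from one NodeList call.
func verifyNodeCapacity(ctx context.Context, cs kubernetes.Interface) error {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("node storage ephemeral capacity is %s\n", eph.String())
		fmt.Printf("node cpu capacity is %s\n", cpu.String())
	}
	return nil
}
```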
	I1014 08:46:18.349328   15224 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 08:46:19.045852   15224 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1014 08:46:19.045882   15224 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1014 08:46:19.045954   15224 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1014 08:46:19.046204   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I1014 08:46:19.046228   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:19.046228   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:19.046266   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:19.056097   15224 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1014 08:46:19.056170   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:19.056170   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:19.056170   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:19.056170   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:19 GMT
	I1014 08:46:19.056170   15224 round_trippers.go:580]     Audit-Id: 0afa301e-6abd-47f5-b7b7-da29b01e34e8
	I1014 08:46:19.056170   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:19.056170   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:19.057046   15224 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1915"},"items":[{"metadata":{"name":"etcd-multinode-671000","namespace":"kube-system","uid":"098aece2-cb2c-470a-878a-872417e4387f","resourceVersion":"1852","creationTimestamp":"2024-10-14T15:46:17Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.20.106.123:2379","kubernetes.io/config.hash":"679486a8f27de5805bc2e87fb1920dce","kubernetes.io/config.mirror":"679486a8f27de5805bc2e87fb1920dce","kubernetes.io/config.seen":"2024-10-14T15:46:09.843414705Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:46:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f [truncated 31353 chars]
	I1014 08:46:19.058480   15224 kubeadm.go:739] kubelet initialised
	I1014 08:46:19.058480   15224 kubeadm.go:740] duration metric: took 12.5255ms waiting for restarted kubelet to initialise ...
	I1014 08:46:19.058480   15224 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
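Each pod now gets up to 4m0s for its PodReady condition to become True. A hedged sketch of one such wait using apimachinery's poll helper (the 500ms interval and retry policy are assumptions; the harness additionally gates on the hosting node's Ready state, sketched after the coredns skip below):

```go
package main

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls until the pod's PodReady condition is True or the
// 4m0s budget from the log expires; transient GET errors are retried.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 4*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			p, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // retry
			}
			for _, c := range p.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}
```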
	I1014 08:46:19.058480   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods
	I1014 08:46:19.058480   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:19.058480   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:19.058480   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:19.065139   15224 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1014 08:46:19.065324   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:19.065479   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:19.065497   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:19.065497   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:19.065532   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:19.065532   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:19 GMT
	I1014 08:46:19.065532   15224 round_trippers.go:580]     Audit-Id: ef054b89-30ee-4760-a876-0f8d7ea29aef
	I1014 08:46:19.066752   15224 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1915"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 91046 chars]
	I1014 08:46:19.070969   15224 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-fs9ct" in "kube-system" namespace to be "Ready" ...
	I1014 08:46:19.071629   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:46:19.071629   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:19.071629   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:19.071721   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:19.074421   15224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 08:46:19.074421   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:19.074421   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:19.074421   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:19.074421   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:19 GMT
	I1014 08:46:19.074421   15224 round_trippers.go:580]     Audit-Id: 3832a640-eb73-40a8-a3ee-e7e00c00cd72
	I1014 08:46:19.074421   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:19.074421   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:19.074421   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I1014 08:46:19.076009   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:19.076104   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:19.076104   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:19.076104   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:19.078333   15224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 08:46:19.078727   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:19.078727   15224 round_trippers.go:580]     Audit-Id: 9544093b-6526-4687-bb61-322267e43d93
	I1014 08:46:19.078727   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:19.078727   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:19.078727   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:19.078727   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:19.078727   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:19 GMT
	I1014 08:46:19.079153   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:19.079770   15224 pod_ready.go:98] node "multinode-671000" hosting pod "coredns-7c65d6cfc9-fs9ct" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-671000" has status "Ready":"False"
	I1014 08:46:19.079872   15224 pod_ready.go:82] duration metric: took 8.8343ms for pod "coredns-7c65d6cfc9-fs9ct" in "kube-system" namespace to be "Ready" ...
	E1014 08:46:19.079872   15224 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-671000" hosting pod "coredns-7c65d6cfc9-fs9ct" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-671000" has status "Ready":"False"
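The pod_ready.go:98 skip fires because the wait checks the hosting node's Ready condition before waiting on the pod itself; with multinode-671000 still NotReady after the restart, every control-plane pod below is skipped the same way. A sketch of that gate, with the same imports as the previous sketch (the helper name is hypothetical):

```go
// nodeReady reports whether the named node's NodeReady condition is True;
// pods hosted on a not-Ready node are skipped rather than waited on.
func nodeReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
	n, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}
```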
	I1014 08:46:19.079906   15224 pod_ready.go:79] waiting up to 4m0s for pod "etcd-multinode-671000" in "kube-system" namespace to be "Ready" ...
	I1014 08:46:19.080037   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-671000
	I1014 08:46:19.080086   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:19.080086   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:19.080131   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:19.083388   15224 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:46:19.083388   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:19.083388   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:19 GMT
	I1014 08:46:19.083388   15224 round_trippers.go:580]     Audit-Id: fe412dc9-9d4e-48e0-9c15-f94fe77520dd
	I1014 08:46:19.083388   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:19.083388   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:19.083388   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:19.083388   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:19.083388   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-671000","namespace":"kube-system","uid":"098aece2-cb2c-470a-878a-872417e4387f","resourceVersion":"1852","creationTimestamp":"2024-10-14T15:46:17Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.20.106.123:2379","kubernetes.io/config.hash":"679486a8f27de5805bc2e87fb1920dce","kubernetes.io/config.mirror":"679486a8f27de5805bc2e87fb1920dce","kubernetes.io/config.seen":"2024-10-14T15:46:09.843414705Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:46:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6841 chars]
	I1014 08:46:19.084815   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:19.084892   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:19.084892   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:19.084892   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:19.087129   15224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 08:46:19.087129   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:19.087129   15224 round_trippers.go:580]     Audit-Id: 7bbf2564-bdde-4fe5-8eab-179b929f9aec
	I1014 08:46:19.087129   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:19.087129   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:19.087129   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:19.087129   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:19.088140   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:19 GMT
	I1014 08:46:19.088446   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:19.089051   15224 pod_ready.go:98] node "multinode-671000" hosting pod "etcd-multinode-671000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-671000" has status "Ready":"False"
	I1014 08:46:19.089129   15224 pod_ready.go:82] duration metric: took 9.2232ms for pod "etcd-multinode-671000" in "kube-system" namespace to be "Ready" ...
	E1014 08:46:19.089129   15224 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-671000" hosting pod "etcd-multinode-671000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-671000" has status "Ready":"False"
	I1014 08:46:19.089129   15224 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-multinode-671000" in "kube-system" namespace to be "Ready" ...
	I1014 08:46:19.089215   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-671000
	I1014 08:46:19.089272   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:19.089314   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:19.089314   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:19.092214   15224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 08:46:19.092538   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:19.092538   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:19 GMT
	I1014 08:46:19.092538   15224 round_trippers.go:580]     Audit-Id: 96221490-98ce-4d13-b34c-1c50eb001ae3
	I1014 08:46:19.092538   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:19.092538   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:19.092538   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:19.092538   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:19.092768   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-671000","namespace":"kube-system","uid":"64595feb-e6e8-4e69-a4b7-6459d15e3beb","resourceVersion":"1823","creationTimestamp":"2024-10-14T15:46:15Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.20.106.123:8443","kubernetes.io/config.hash":"864b69f35cb25e9dd5d87a753a055a10","kubernetes.io/config.mirror":"864b69f35cb25e9dd5d87a753a055a10","kubernetes.io/config.seen":"2024-10-14T15:46:09.765946769Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:46:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 8293 chars]
	I1014 08:46:19.093732   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:19.093788   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:19.093788   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:19.093848   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:19.103021   15224 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1014 08:46:19.103021   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:19.103021   15224 round_trippers.go:580]     Audit-Id: 307ab35a-40bf-407e-8717-b78948461267
	I1014 08:46:19.103021   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:19.103021   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:19.103021   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:19.103021   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:19.103021   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:19 GMT
	I1014 08:46:19.103021   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:19.103911   15224 pod_ready.go:98] node "multinode-671000" hosting pod "kube-apiserver-multinode-671000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-671000" has status "Ready":"False"
	I1014 08:46:19.104117   15224 pod_ready.go:82] duration metric: took 14.9876ms for pod "kube-apiserver-multinode-671000" in "kube-system" namespace to be "Ready" ...
	E1014 08:46:19.104117   15224 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-671000" hosting pod "kube-apiserver-multinode-671000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-671000" has status "Ready":"False"
	I1014 08:46:19.104213   15224 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-multinode-671000" in "kube-system" namespace to be "Ready" ...
	I1014 08:46:19.104304   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-671000
	I1014 08:46:19.104304   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:19.104304   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:19.104304   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:19.106845   15224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 08:46:19.106845   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:19.106845   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:19.107248   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:19.107248   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:19.107248   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:19 GMT
	I1014 08:46:19.107248   15224 round_trippers.go:580]     Audit-Id: eb45ac40-5b1e-4f52-b530-f756b4823b45
	I1014 08:46:19.107248   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:19.107351   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-671000","namespace":"kube-system","uid":"a5c7bb80-c844-476f-ba47-1cd4e599b92d","resourceVersion":"1821","creationTimestamp":"2024-10-14T15:22:39Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b83e31864e0ae98d29d960866012ecb0","kubernetes.io/config.mirror":"b83e31864e0ae98d29d960866012ecb0","kubernetes.io/config.seen":"2024-10-14T15:22:39.775213119Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7737 chars]
	I1014 08:46:19.107929   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:19.107929   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:19.107929   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:19.108102   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:19.109819   15224 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1014 08:46:19.109819   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:19.109819   15224 round_trippers.go:580]     Audit-Id: 0fa139c8-1acb-45ed-a3ec-61712883e1c2
	I1014 08:46:19.109819   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:19.109819   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:19.109819   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:19.109819   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:19.109819   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:19 GMT
	I1014 08:46:19.110444   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:19.111112   15224 pod_ready.go:98] node "multinode-671000" hosting pod "kube-controller-manager-multinode-671000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-671000" has status "Ready":"False"
	I1014 08:46:19.111112   15224 pod_ready.go:82] duration metric: took 6.8981ms for pod "kube-controller-manager-multinode-671000" in "kube-system" namespace to be "Ready" ...
	E1014 08:46:19.111112   15224 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-671000" hosting pod "kube-controller-manager-multinode-671000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-671000" has status "Ready":"False"
	I1014 08:46:19.111112   15224 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-kbpjf" in "kube-system" namespace to be "Ready" ...
	I1014 08:46:19.246940   15224 request.go:632] Waited for 135.8283ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kbpjf
	I1014 08:46:19.246940   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kbpjf
	I1014 08:46:19.246940   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:19.246940   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:19.246940   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:19.252552   15224 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:46:19.252552   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:19.252552   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:19.252552   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:19.252552   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:19.252552   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:19 GMT
	I1014 08:46:19.252552   15224 round_trippers.go:580]     Audit-Id: f3866a4a-4654-44ea-9a3c-a727cefd5824
	I1014 08:46:19.252552   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:19.252552   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-kbpjf","generateName":"kube-proxy-","namespace":"kube-system","uid":"004b7f38-fa3b-4c2c-9524-8d5b1ba514e9","resourceVersion":"1803","creationTimestamp":"2024-10-14T15:25:49Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e5217416-1ed2-4ea5-ae9b-ee7bcdba0d2a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:25:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e5217416-1ed2-4ea5-ae9b-ee7bcdba0d2a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6433 chars]
	I1014 08:46:19.446206   15224 request.go:632] Waited for 192.3619ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.106.123:8443/api/v1/nodes/multinode-671000-m02
	I1014 08:46:19.446206   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000-m02
	I1014 08:46:19.446206   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:19.446206   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:19.446206   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:19.450219   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:19.450219   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:19.450219   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:19.450219   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:19.450219   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:19 GMT
	I1014 08:46:19.450312   15224 round_trippers.go:580]     Audit-Id: 187a228c-30d5-43ec-a369-8f77969b7532
	I1014 08:46:19.450312   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:19.450312   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:19.450437   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000-m02","uid":"6883c70a-cd16-4441-905e-912bed6d2b2c","resourceVersion":"1802","creationTimestamp":"2024-10-14T15:25:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_10_14T08_25_50_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:25:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4487 chars]
	I1014 08:46:19.451051   15224 pod_ready.go:98] node "multinode-671000-m02" hosting pod "kube-proxy-kbpjf" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-671000-m02" has status "Ready":"Unknown"
	I1014 08:46:19.451256   15224 pod_ready.go:82] duration metric: took 339.9386ms for pod "kube-proxy-kbpjf" in "kube-system" namespace to be "Ready" ...
	E1014 08:46:19.451278   15224 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-671000-m02" hosting pod "kube-proxy-kbpjf" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-671000-m02" has status "Ready":"Unknown"
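The request.go:632 waits in this stretch come from client-go's default client-side token-bucket rate limiter (QPS 5, burst 10), not from server-side API Priority and Fairness, exactly as the message states. A sketch of raising those limits on a rest config (the values are illustrative, not what the harness actually uses):

```go
package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cfg.QPS = 50    // default 5 req/s produces the ~190ms waits logged here
	cfg.Burst = 100 // default burst is 10
	if _, err := kubernetes.NewForConfig(cfg); err != nil {
		panic(err)
	}
}
```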
	I1014 08:46:19.451278   15224 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-n6txs" in "kube-system" namespace to be "Ready" ...
	I1014 08:46:19.646165   15224 request.go:632] Waited for 194.7864ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/kube-proxy-n6txs
	I1014 08:46:19.646165   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/kube-proxy-n6txs
	I1014 08:46:19.646165   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:19.646165   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:19.646165   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:19.649574   15224 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:46:19.649574   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:19.649574   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:19.649574   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:19.650596   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:19.650596   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:19.650623   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:19 GMT
	I1014 08:46:19.650623   15224 round_trippers.go:580]     Audit-Id: 748ea5df-9734-42af-840e-3ee07707fa9b
	I1014 08:46:19.651257   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-n6txs","generateName":"kube-proxy-","namespace":"kube-system","uid":"796a44f9-2067-438d-9359-34d5f968c861","resourceVersion":"1784","creationTimestamp":"2024-10-14T15:30:35Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e5217416-1ed2-4ea5-ae9b-ee7bcdba0d2a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:30:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e5217416-1ed2-4ea5-ae9b-ee7bcdba0d2a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6428 chars]
	I1014 08:46:19.846137   15224 request.go:632] Waited for 194.6268ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.106.123:8443/api/v1/nodes/multinode-671000-m03
	I1014 08:46:19.846137   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000-m03
	I1014 08:46:19.846137   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:19.846137   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:19.846137   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:19.851717   15224 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:46:19.851717   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:19.851717   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:19 GMT
	I1014 08:46:19.851717   15224 round_trippers.go:580]     Audit-Id: d6dc5256-6236-4401-91fd-3938710e1e67
	I1014 08:46:19.851717   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:19.851717   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:19.851717   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:19.851717   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:19.851717   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000-m03","uid":"a7ea02fb-ac24-4430-adbc-9815c644cfa0","resourceVersion":"1897","creationTimestamp":"2024-10-14T15:41:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_10_14T08_41_35_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:41:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4303 chars]
	I1014 08:46:19.852403   15224 pod_ready.go:98] node "multinode-671000-m03" hosting pod "kube-proxy-n6txs" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-671000-m03" has status "Ready":"Unknown"
	I1014 08:46:19.852403   15224 pod_ready.go:82] duration metric: took 401.1243ms for pod "kube-proxy-n6txs" in "kube-system" namespace to be "Ready" ...
	E1014 08:46:19.852403   15224 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-671000-m03" hosting pod "kube-proxy-n6txs" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-671000-m03" has status "Ready":"Unknown"
	I1014 08:46:19.852403   15224 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-r74dx" in "kube-system" namespace to be "Ready" ...
	I1014 08:46:20.047159   15224 request.go:632] Waited for 194.7552ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r74dx
	I1014 08:46:20.047159   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r74dx
	I1014 08:46:20.047159   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:20.047159   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:20.047159   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:20.051858   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:20.051858   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:20.051858   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:20 GMT
	I1014 08:46:20.051858   15224 round_trippers.go:580]     Audit-Id: 716331ce-6faf-4057-94da-86ade670c50e
	I1014 08:46:20.051858   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:20.051858   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:20.051858   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:20.051858   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:20.051858   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-r74dx","generateName":"kube-proxy-","namespace":"kube-system","uid":"f8d14473-8859-4015-84e9-d00656cc00c9","resourceVersion":"1856","creationTimestamp":"2024-10-14T15:22:44Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e5217416-1ed2-4ea5-ae9b-ee7bcdba0d2a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e5217416-1ed2-4ea5-ae9b-ee7bcdba0d2a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6405 chars]
	I1014 08:46:20.247011   15224 request.go:632] Waited for 193.9468ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:20.247011   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:20.247011   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:20.247011   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:20.247011   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:20.252392   15224 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:46:20.252392   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:20.252392   15224 round_trippers.go:580]     Audit-Id: e82a2ff6-4c6b-41ff-bfd6-29d0fcd979b0
	I1014 08:46:20.252498   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:20.252498   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:20.252498   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:20.252498   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:20.252498   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:20 GMT
	I1014 08:46:20.252842   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:20.253523   15224 pod_ready.go:98] node "multinode-671000" hosting pod "kube-proxy-r74dx" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-671000" has status "Ready":"False"
	I1014 08:46:20.253523   15224 pod_ready.go:82] duration metric: took 401.1194ms for pod "kube-proxy-r74dx" in "kube-system" namespace to be "Ready" ...
	E1014 08:46:20.253523   15224 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-671000" hosting pod "kube-proxy-r74dx" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-671000" has status "Ready":"False"
	I1014 08:46:20.253523   15224 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-multinode-671000" in "kube-system" namespace to be "Ready" ...
	I1014 08:46:20.446344   15224 request.go:632] Waited for 192.8202ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-671000
	I1014 08:46:20.446344   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-671000
	I1014 08:46:20.446344   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:20.446344   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:20.446344   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:20.452363   15224 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1014 08:46:20.452532   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:20.452532   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:20.452532   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:20.452532   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:20.452532   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:20.452532   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:20 GMT
	I1014 08:46:20.452532   15224 round_trippers.go:580]     Audit-Id: a9a397a9-37dd-472d-87e8-017d88052826
	I1014 08:46:20.452912   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-671000","namespace":"kube-system","uid":"97febcab-f54d-4338-ba7c-2dc5e69b77fc","resourceVersion":"1819","creationTimestamp":"2024-10-14T15:22:38Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"3e987cfaedc75c39145e8fc131c60c81","kubernetes.io/config.mirror":"3e987cfaedc75c39145e8fc131c60c81","kubernetes.io/config.seen":"2024-10-14T15:22:32.104995089Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5449 chars]
	I1014 08:46:20.646876   15224 request.go:632] Waited for 193.3118ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:20.647345   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:20.647345   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:20.647345   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:20.647345   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:20.651509   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:20.651509   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:20.651509   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:20 GMT
	I1014 08:46:20.651509   15224 round_trippers.go:580]     Audit-Id: 88263073-d4b3-499e-a6e0-046a8c95d6d3
	I1014 08:46:20.651509   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:20.651509   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:20.651509   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:20.651509   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:20.651509   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:20.652591   15224 pod_ready.go:98] node "multinode-671000" hosting pod "kube-scheduler-multinode-671000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-671000" has status "Ready":"False"
	I1014 08:46:20.652591   15224 pod_ready.go:82] duration metric: took 399.0672ms for pod "kube-scheduler-multinode-671000" in "kube-system" namespace to be "Ready" ...
	E1014 08:46:20.652591   15224 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-671000" hosting pod "kube-scheduler-multinode-671000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-671000" has status "Ready":"False"
	I1014 08:46:20.652591   15224 pod_ready.go:39] duration metric: took 1.5941087s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
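
The "Waited for 193.3118ms due to client-side throttling, not priority and fairness" line above is worth calling out: that delay is imposed by client-go's token-bucket rate limiter inside the minikube binary, not by the API server's priority-and-fairness queues. Below is a minimal sketch of how a caller raises those limits, using client-go's real rest.Config knobs QPS and Burst (which default to 5 and 10); the function name is illustrative, not minikube's code:

    package sketch

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // newFastClient builds a clientset whose client-side rate limiter allows
    // 50 requests/s with bursts of 100, so a tight poll loop like the one in
    // this log is not queued for tens of milliseconds per request.
    func newFastClient(kubeconfig string) (*kubernetes.Clientset, error) {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            return nil, err
        }
        cfg.QPS = 50    // default is 5
        cfg.Burst = 100 // default is 10
        return kubernetes.NewForConfig(cfg)
    }
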
	I1014 08:46:20.652699   15224 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1014 08:46:20.672053   15224 command_runner.go:130] > -16
	I1014 08:46:20.672053   15224 ops.go:34] apiserver oom_adj: -16
	I1014 08:46:20.672053   15224 kubeadm.go:597] duration metric: took 13.8102629s to restartPrimaryControlPlane
	I1014 08:46:20.672053   15224 kubeadm.go:394] duration metric: took 13.8787245s to StartCluster
	I1014 08:46:20.672053   15224 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 08:46:20.672654   15224 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1014 08:46:20.674368   15224 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 08:46:20.676008   15224 start.go:235] Will wait 6m0s for node &{Name: IP:172.20.106.123 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1014 08:46:20.676008   15224 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1014 08:46:20.676008   15224 config.go:182] Loaded profile config "multinode-671000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 08:46:20.680839   15224 out.go:177] * Verifying Kubernetes components...
	I1014 08:46:20.684611   15224 out.go:177] * Enabled addons: 
	I1014 08:46:20.689229   15224 addons.go:510] duration metric: took 13.2209ms for enable addons: enabled=[]
	I1014 08:46:20.696921   15224 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 08:46:20.962750   15224 ssh_runner.go:195] Run: sudo systemctl start kubelet
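
The two ssh_runner lines above reload systemd units and restart the kubelet inside the Hyper-V guest over SSH. A rough sketch of that mechanism with golang.org/x/crypto/ssh, assuming a plain one-shot session; the address, client config, and helper name are placeholders, not values from this run:

    package sketch

    import "golang.org/x/crypto/ssh"

    // sshRun opens one SSH session and runs a single command, returning its
    // combined stdout/stderr, e.g. sshRun(addr, cfg, "sudo systemctl start kubelet").
    func sshRun(addr string, cfg *ssh.ClientConfig, cmd string) (string, error) {
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            return "", err
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            return "", err
        }
        defer sess.Close()
        out, err := sess.CombinedOutput(cmd)
        return string(out), err
    }
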
	I1014 08:46:20.989170   15224 node_ready.go:35] waiting up to 6m0s for node "multinode-671000" to be "Ready" ...
	I1014 08:46:20.989170   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:20.989170   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:20.989170   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:20.989170   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:20.993850   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:20.993920   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:20.993920   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:20.993920   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:20.993920   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:20.993920   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:20.993920   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:20 GMT
	I1014 08:46:20.994009   15224 round_trippers.go:580]     Audit-Id: 7464651d-7d50-4a2f-bf97-57247a07d5fc
	I1014 08:46:20.995204   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:21.490070   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:21.490070   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:21.490070   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:21.490070   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:21.495003   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:21.495114   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:21.495114   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:21 GMT
	I1014 08:46:21.495196   15224 round_trippers.go:580]     Audit-Id: 69fcf899-fb08-436b-b860-9d7bf5403e18
	I1014 08:46:21.495263   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:21.495263   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:21.495263   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:21.495263   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:21.495492   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:21.989789   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:21.989856   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:21.989856   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:21.989856   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:21.994919   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:21.994919   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:21.995025   15224 round_trippers.go:580]     Audit-Id: 4591f3ee-302b-4b1d-bc3b-8f40dd26e8d1
	I1014 08:46:21.995025   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:21.995025   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:21.995025   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:21.995025   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:21.995025   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:21 GMT
	I1014 08:46:21.995122   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:22.489573   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:22.489573   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:22.489573   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:22.489573   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:22.494198   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:22.494867   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:22.494867   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:22.494867   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:22.494867   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:22 GMT
	I1014 08:46:22.494867   15224 round_trippers.go:580]     Audit-Id: f3460bd5-b9fa-4bc5-98f4-8bbd9559aedf
	I1014 08:46:22.494867   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:22.494867   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:22.495223   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:22.989693   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:22.989693   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:22.989693   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:22.989693   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:23.000733   15224 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1014 08:46:23.000733   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:23.000733   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:23.000733   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:23.000733   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:23 GMT
	I1014 08:46:23.000733   15224 round_trippers.go:580]     Audit-Id: 37ba7b53-0164-4e4a-92fc-d738109fbe97
	I1014 08:46:23.000733   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:23.000733   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:23.000733   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:23.001726   15224 node_ready.go:53] node "multinode-671000" has status "Ready":"False"
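
The cycle above repeats every ~500ms until the node reports Ready (node_ready.go periodically logs the negative result, as on the previous line). Below is a minimal sketch of the same wait pattern with client-go, assuming the 500ms interval read off this log's cadence and the 6m timeout printed earlier; the helper name is illustrative, not minikube's source:

    package sketch

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitNodeReady polls the Node object until its Ready condition reports
    // True, giving up after timeout.
    func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // treat transient API errors as "not ready yet"
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }
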
	I1014 08:46:23.489664   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:23.489664   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:23.489664   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:23.489664   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:23.493925   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:23.493925   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:23.493925   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:23.494022   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:23.494022   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:23.494022   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:23.494022   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:23 GMT
	I1014 08:46:23.494022   15224 round_trippers.go:580]     Audit-Id: fbe77ef9-9725-42c9-9a43-fe0648d2785b
	I1014 08:46:23.494315   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:23.989306   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:23.989306   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:23.989306   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:23.989306   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:23.994245   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:23.994329   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:23.994329   15224 round_trippers.go:580]     Audit-Id: a8753d30-2702-4929-bde5-81de62393e5b
	I1014 08:46:23.994329   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:23.994329   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:23.994329   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:23.994329   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:23.994329   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:23 GMT
	I1014 08:46:23.994728   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:24.496715   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:24.496715   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:24.496841   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:24.496841   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:24.500912   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:24.501023   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:24.501023   15224 round_trippers.go:580]     Audit-Id: e052191c-bf0d-4f02-af7b-c2736a935942
	I1014 08:46:24.501023   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:24.501023   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:24.501023   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:24.501023   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:24.501023   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:24 GMT
	I1014 08:46:24.501367   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:24.989449   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:24.989449   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:24.989449   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:24.989449   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:24.993681   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:24.993681   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:24.993681   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:24.993681   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:24.993681   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:24 GMT
	I1014 08:46:24.993851   15224 round_trippers.go:580]     Audit-Id: 9d37bd0a-7988-43a3-aa0b-159b6a7eec19
	I1014 08:46:24.993851   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:24.993851   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:24.994137   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:25.489868   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:25.489943   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:25.489943   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:25.489943   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:25.496479   15224 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1014 08:46:25.496479   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:25.496479   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:25.496479   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:25.496479   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:25.496479   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:25 GMT
	I1014 08:46:25.496479   15224 round_trippers.go:580]     Audit-Id: e1f097b2-0a02-4c90-bd33-f95a4c1b08bd
	I1014 08:46:25.496479   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:25.496479   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:25.497321   15224 node_ready.go:53] node "multinode-671000" has status "Ready":"False"
	I1014 08:46:25.990029   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:25.990029   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:25.990029   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:25.990029   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:25.994352   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:25.994796   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:25.994796   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:25.994796   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:25 GMT
	I1014 08:46:25.994796   15224 round_trippers.go:580]     Audit-Id: 90663653-f659-418b-8bc5-ac54bbaab39f
	I1014 08:46:25.994796   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:25.994796   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:25.994796   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:25.995247   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:26.489549   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:26.489549   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:26.490181   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:26.490181   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:26.495265   15224 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:46:26.495331   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:26.495331   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:26.495408   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:26.495408   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:26.495408   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:26.495408   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:26 GMT
	I1014 08:46:26.495408   15224 round_trippers.go:580]     Audit-Id: 25742e4f-471f-40b2-834c-a84f8f670590
	I1014 08:46:26.495610   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:26.989919   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:26.990457   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:26.990457   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:26.990457   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:27.005402   15224 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I1014 08:46:27.005402   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:27.005402   15224 round_trippers.go:580]     Audit-Id: 6457596b-92b6-46c1-b4a5-c5635f465c51
	I1014 08:46:27.005402   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:27.005402   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:27.005402   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:27.005402   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:27.005402   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:27 GMT
	I1014 08:46:27.005402   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:27.490507   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:27.490600   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:27.490600   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:27.490600   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:27.494888   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:27.494962   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:27.494962   15224 round_trippers.go:580]     Audit-Id: ddfa7714-e88c-48c2-8ff7-53c248cddda8
	I1014 08:46:27.495019   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:27.495019   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:27.495019   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:27.495019   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:27.495019   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:27 GMT
	I1014 08:46:27.495083   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:27.990174   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:27.990174   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:27.990174   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:27.990174   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:27.995608   15224 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:46:27.995684   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:27.995684   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:27.995765   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:27 GMT
	I1014 08:46:27.995765   15224 round_trippers.go:580]     Audit-Id: 01f27a49-39f4-46da-9a7e-28bcfb69916a
	I1014 08:46:27.995765   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:27.995765   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:27.995765   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:27.995903   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:27.996877   15224 node_ready.go:53] node "multinode-671000" has status "Ready":"False"
	I1014 08:46:28.490581   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:28.490656   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:28.490656   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:28.490656   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:28.495236   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:28.495301   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:28.495301   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:28.495301   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:28.495301   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:28.495301   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:28 GMT
	I1014 08:46:28.495301   15224 round_trippers.go:580]     Audit-Id: 967a4b9d-0ad3-46a9-b21f-72fae183488c
	I1014 08:46:28.495301   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:28.495730   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:28.989660   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:28.989660   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:28.989660   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:28.989660   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:28.993840   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:28.994392   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:28.994392   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:28 GMT
	I1014 08:46:28.994392   15224 round_trippers.go:580]     Audit-Id: 4f8d1207-1ceb-4deb-89d7-efb8f832d8d0
	I1014 08:46:28.994392   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:28.994392   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:28.994392   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:28.994392   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:28.994826   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:29.489481   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:29.489481   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:29.490118   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:29.490118   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:29.494219   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:29.494291   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:29.494291   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:29.494291   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:29.494291   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:29.494291   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:29 GMT
	I1014 08:46:29.494291   15224 round_trippers.go:580]     Audit-Id: ba854cb9-1628-4230-88e6-6b29de214981
	I1014 08:46:29.494291   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:29.494291   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:29.989475   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:29.989475   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:29.989475   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:29.989475   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:29.993484   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:29.993484   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:29.993484   15224 round_trippers.go:580]     Audit-Id: ba30fc62-10d7-49ed-9b11-72d825c5536a
	I1014 08:46:29.993484   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:29.993484   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:29.993484   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:29.993484   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:29.993484   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:29 GMT
	I1014 08:46:29.993484   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:30.489983   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:30.489983   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:30.489983   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:30.489983   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:30.494949   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:30.495054   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:30.495054   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:30.495054   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:30.495054   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:30.495054   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:30 GMT
	I1014 08:46:30.495054   15224 round_trippers.go:580]     Audit-Id: 281929cf-ac7a-428b-b06b-18a2823ea343
	I1014 08:46:30.495054   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:30.495378   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:30.496048   15224 node_ready.go:53] node "multinode-671000" has status "Ready":"False"
	I1014 08:46:30.990110   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:30.990110   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:30.990110   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:30.990110   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:30.995119   15224 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:46:30.995119   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:30.995119   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:30.995119   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:30.995119   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:30 GMT
	I1014 08:46:30.995119   15224 round_trippers.go:580]     Audit-Id: 839313f2-a3a3-41c1-a9a3-b0cfbe670573
	I1014 08:46:30.995119   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:30.995119   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:30.995119   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:31.490036   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:31.490036   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:31.490036   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:31.490036   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:31.495114   15224 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:46:31.495202   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:31.495202   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:31.495202   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:31.495202   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:31 GMT
	I1014 08:46:31.495278   15224 round_trippers.go:580]     Audit-Id: 731a244d-c39d-4c3a-8ca1-2ad9cebe906d
	I1014 08:46:31.495278   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:31.495278   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:31.496329   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:31.989401   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:31.989401   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:31.989401   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:31.989401   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:31.994608   15224 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:46:31.994684   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:31.994778   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:31 GMT
	I1014 08:46:31.994778   15224 round_trippers.go:580]     Audit-Id: 1c2c5522-6771-4faa-abac-381f1772deb5
	I1014 08:46:31.994778   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:31.994778   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:31.994778   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:31.994778   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:31.995322   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:32.489923   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:32.489923   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:32.489923   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:32.489923   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:32.495311   15224 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:46:32.495404   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:32.495404   15224 round_trippers.go:580]     Audit-Id: 8a13ff28-51ab-47d3-a487-c7067b004aaa
	I1014 08:46:32.495404   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:32.495404   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:32.495404   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:32.495404   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:32.495404   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:32 GMT
	I1014 08:46:32.495662   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:32.989950   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:32.989950   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:32.989950   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:32.989950   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:32.994813   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:32.994945   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:32.995010   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:32.995010   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:32 GMT
	I1014 08:46:32.995010   15224 round_trippers.go:580]     Audit-Id: a807225b-7234-4720-817c-dd74eaf7bb3d
	I1014 08:46:32.995010   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:32.995010   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:32.995010   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:32.995010   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:32.995930   15224 node_ready.go:53] node "multinode-671000" has status "Ready":"False"
	I1014 08:46:33.490366   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:33.490366   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:33.490366   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:33.490366   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:33.494840   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:33.494965   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:33.494965   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:33 GMT
	I1014 08:46:33.494965   15224 round_trippers.go:580]     Audit-Id: c29c4850-ebb6-4705-83ec-0b0483df99f2
	I1014 08:46:33.494965   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:33.494965   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:33.494965   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:33.494965   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:33.495156   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:33.989373   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:33.989373   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:33.989373   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:33.989373   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:33.994100   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:33.994171   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:33.994171   15224 round_trippers.go:580]     Audit-Id: 009cd9f4-48dd-48a4-994f-9f2bf54e56aa
	I1014 08:46:33.994231   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:33.994231   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:33.994231   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:33.994231   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:33.994231   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:33 GMT
	I1014 08:46:33.994750   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:34.489959   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:34.489959   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:34.489959   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:34.489959   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:34.494602   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:34.494602   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:34.494602   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:34.494742   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:34.494742   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:34.494742   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:34.494742   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:34 GMT
	I1014 08:46:34.494742   15224 round_trippers.go:580]     Audit-Id: b5628196-9f24-46db-9e1b-76596ab7641f
	I1014 08:46:34.495174   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:34.989658   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:34.989658   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:34.989658   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:34.989658   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:34.995512   15224 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:46:34.995512   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:34.995512   15224 round_trippers.go:580]     Audit-Id: 21863c3c-5205-4b95-bc17-463765c6acbd
	I1014 08:46:34.995512   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:34.995650   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:34.995650   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:34.995650   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:34.995650   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:34 GMT
	I1014 08:46:34.996270   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:34.996831   15224 node_ready.go:53] node "multinode-671000" has status "Ready":"False"
	I1014 08:46:35.489329   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:35.489329   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:35.489329   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:35.489329   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:35.493961   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:35.493961   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:35.494027   15224 round_trippers.go:580]     Audit-Id: 67262c8a-545e-48ca-ab0e-016585502540
	I1014 08:46:35.494027   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:35.494027   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:35.494027   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:35.494027   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:35.494027   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:35 GMT
	I1014 08:46:35.494027   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:35.989339   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:35.989339   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:35.989339   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:35.989339   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:35.993558   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:35.993558   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:35.993558   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:35.993558   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:35.993755   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:35 GMT
	I1014 08:46:35.993755   15224 round_trippers.go:580]     Audit-Id: b2c42c30-190f-437f-99bc-b7442cab2daf
	I1014 08:46:35.993755   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:35.993755   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:35.994348   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:36.489305   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:36.489305   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:36.489305   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:36.489305   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:36.494871   15224 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:46:36.494871   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:36.494871   15224 round_trippers.go:580]     Audit-Id: 7b7650a4-e14c-42a5-8351-49c336ef59a2
	I1014 08:46:36.495413   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:36.495413   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:36.495413   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:36.495413   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:36.495413   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:36 GMT
	I1014 08:46:36.495639   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:36.989664   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:36.989664   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:36.989664   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:36.989664   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:36.995052   15224 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:46:36.995052   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:36.995131   15224 round_trippers.go:580]     Audit-Id: d131c52b-39ea-4d2c-a158-bc5b31a61e5d
	I1014 08:46:36.995131   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:36.995131   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:36.995131   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:36.995131   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:36.995131   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:36 GMT
	I1014 08:46:36.995553   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:37.490073   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:37.490183   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:37.490183   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:37.490183   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:37.495848   15224 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:46:37.495944   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:37.495944   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:37.495944   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:37.495944   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:37 GMT
	I1014 08:46:37.495944   15224 round_trippers.go:580]     Audit-Id: b64a4090-20c4-4569-b622-4e31f5e9097c
	I1014 08:46:37.496017   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:37.496017   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:37.496324   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:37.497030   15224 node_ready.go:53] node "multinode-671000" has status "Ready":"False"
	I1014 08:46:37.989954   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:37.989954   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:37.989954   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:37.989954   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:37.994239   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:37.994372   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:37.994372   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:37.994372   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:37.994372   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:37.994372   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:37.994372   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:37 GMT
	I1014 08:46:37.994372   15224 round_trippers.go:580]     Audit-Id: 977cae36-9de4-4e41-ae2f-047dc5d41284
	I1014 08:46:37.994768   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:38.489778   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:38.489778   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:38.489778   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:38.489778   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:38.495123   15224 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:46:38.495218   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:38.495218   15224 round_trippers.go:580]     Audit-Id: d572e744-f4e9-4bef-b476-957159d67e33
	I1014 08:46:38.495218   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:38.495218   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:38.495218   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:38.495218   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:38.495218   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:38 GMT
	I1014 08:46:38.495471   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:38.989850   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:38.989850   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:38.989850   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:38.989850   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:38.993902   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:38.993993   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:38.993993   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:38.993993   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:38.993993   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:38.993993   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:38.994073   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:38 GMT
	I1014 08:46:38.994073   15224 round_trippers.go:580]     Audit-Id: 23a92d7a-2949-4129-a8da-ac9d3dcb3881
	I1014 08:46:38.995039   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:39.489385   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:39.489385   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:39.489385   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:39.489385   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:39.494487   15224 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:46:39.494487   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:39.494577   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:39.494577   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:39.494577   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:39.494577   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:39 GMT
	I1014 08:46:39.494577   15224 round_trippers.go:580]     Audit-Id: 16e8e9f0-8b63-4207-b310-3326dea741ff
	I1014 08:46:39.494577   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:39.494798   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:39.989393   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:39.989393   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:39.989393   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:39.989393   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:39.998914   15224 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1014 08:46:39.999002   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:39.999084   15224 round_trippers.go:580]     Audit-Id: cb42705c-9bee-49ed-98ac-dbd4cfe3f8c5
	I1014 08:46:39.999084   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:39.999084   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:39.999084   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:39.999084   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:39.999084   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:40 GMT
	I1014 08:46:39.999543   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:39.999813   15224 node_ready.go:53] node "multinode-671000" has status "Ready":"False"
	I1014 08:46:40.489712   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:40.489712   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:40.489712   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:40.489712   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:40.495782   15224 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1014 08:46:40.495782   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:40.495782   15224 round_trippers.go:580]     Audit-Id: 3f6e5b78-2574-4765-bf70-84f927d22f4f
	I1014 08:46:40.495782   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:40.495782   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:40.495782   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:40.495782   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:40.495782   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:40 GMT
	I1014 08:46:40.496243   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:40.989460   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:40.989460   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:40.989460   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:40.989460   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:40.994352   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:40.994460   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:40.994588   15224 round_trippers.go:580]     Audit-Id: 45f911a3-9906-4ae9-b83d-2ab36d1e83b2
	I1014 08:46:40.994588   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:40.994609   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:40.994609   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:40.994609   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:40.994609   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:40 GMT
	I1014 08:46:40.994766   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:41.489414   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:41.489414   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:41.489414   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:41.489414   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:41.496474   15224 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1014 08:46:41.496474   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:41.496560   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:41.496560   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:41 GMT
	I1014 08:46:41.496560   15224 round_trippers.go:580]     Audit-Id: b492ec8a-176c-4ec1-9d5e-39e50903b41c
	I1014 08:46:41.496560   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:41.496560   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:41.496761   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:41.496862   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:41.989927   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:41.989927   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:41.989927   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:41.989927   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:41.994844   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:41.994948   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:41.994948   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:41.994948   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:41.994948   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:41 GMT
	I1014 08:46:41.994948   15224 round_trippers.go:580]     Audit-Id: ebe96fa1-0464-49c2-a2a5-755ec8aa99e0
	I1014 08:46:41.994948   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:41.994948   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:41.995294   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:42.489724   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:42.489724   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:42.489724   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:42.489724   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:42.494852   15224 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:46:42.494852   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:42.494852   15224 round_trippers.go:580]     Audit-Id: 27e616b5-1b0f-44c6-822a-da6ff38ab34b
	I1014 08:46:42.494852   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:42.494852   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:42.494852   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:42.494852   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:42.494852   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:42 GMT
	I1014 08:46:42.495305   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:42.496428   15224 node_ready.go:53] node "multinode-671000" has status "Ready":"False"
	I1014 08:46:42.989752   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:42.989752   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:42.989752   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:42.989752   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:43.002330   15224 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1014 08:46:43.002451   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:43.002451   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:43 GMT
	I1014 08:46:43.002451   15224 round_trippers.go:580]     Audit-Id: 6338e68a-d919-40cc-9cea-dc4b1b255779
	I1014 08:46:43.002451   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:43.002451   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:43.002451   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:43.002451   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:43.002852   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:43.490135   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:43.490135   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:43.490135   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:43.490135   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:43.498057   15224 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1014 08:46:43.498057   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:43.498057   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:43.498057   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:43.498057   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:43.498057   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:43.498057   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:43 GMT
	I1014 08:46:43.498057   15224 round_trippers.go:580]     Audit-Id: 0ec2c709-8b4b-4e66-a422-286a261c3534
	I1014 08:46:43.498057   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:43.989303   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:43.989303   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:43.989303   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:43.989303   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:43.995671   15224 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1014 08:46:43.995945   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:43.995945   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:43.995945   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:43.995945   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:43.995945   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:43 GMT
	I1014 08:46:43.995945   15224 round_trippers.go:580]     Audit-Id: 97d3b00a-fecc-4473-8932-a14671c84e57
	I1014 08:46:43.995994   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:43.996413   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:44.490156   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:44.490156   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:44.490156   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:44.490156   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:44.494415   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:44.494500   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:44.494500   15224 round_trippers.go:580]     Audit-Id: 94d462bf-1072-416b-907a-70500d4dad49
	I1014 08:46:44.494500   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:44.494500   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:44.494500   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:44.494500   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:44.494500   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:44 GMT
	I1014 08:46:44.494925   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:44.990178   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:44.990178   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:44.990178   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:44.990178   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:44.995338   15224 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:46:44.995457   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:44.995457   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:44.995457   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:44.995457   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:44.995548   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:44 GMT
	I1014 08:46:44.995548   15224 round_trippers.go:580]     Audit-Id: 42adf9a8-1575-40c8-9f65-dabe8574908d
	I1014 08:46:44.995548   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:44.996081   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:44.996918   15224 node_ready.go:53] node "multinode-671000" has status "Ready":"False"
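
The block above is one iteration of minikube's node-readiness wait: node_ready.go re-fetches /api/v1/nodes/multinode-671000 roughly every 500 ms and checks the Node's Ready condition, while the round_trippers entries are client-go's verbose tracing of each HTTP request. The same iteration repeats below until the condition flips to True. A minimal sketch of such a poll, written against client-go with a hypothetical kubeconfig path and timeout (this is an illustration, not minikube's actual implementation):

// Illustrative sketch only: poll a Node's Ready condition every 500 ms,
// mirroring the GET loop visible in the log above. The kubeconfig path,
// node name, and timeout are assumptions for the example.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the Node's Ready condition is True.
func nodeReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	for {
		node, err := client.CoreV1().Nodes().Get(ctx, "multinode-671000", metav1.GetOptions{})
		if err == nil && nodeReady(node) {
			fmt.Println("node is Ready")
			return
		}
		if err == nil {
			fmt.Printf("node %q has status \"Ready\":\"False\"\n", node.Name)
		}
		select {
		case <-ctx.Done():
			panic("timed out waiting for node to become Ready")
		case <-time.After(500 * time.Millisecond):
		}
	}
}
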
	I1014 08:46:45.489909   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:45.489909   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:45.489909   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:45.489909   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:45.493927   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:45.494926   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:45.494926   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:45.494926   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:45 GMT
	I1014 08:46:45.494926   15224 round_trippers.go:580]     Audit-Id: 27646770-9000-4604-8a4a-bdb69bbd9c82
	I1014 08:46:45.494926   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:45.495035   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:45.495035   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:45.495035   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:45.989467   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:45.989467   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:45.989467   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:45.989467   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:45.993688   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:45.993736   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:45.993736   15224 round_trippers.go:580]     Audit-Id: 705aa739-1e38-498f-9171-44fae4701e8a
	I1014 08:46:45.993736   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:45.993736   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:45.993736   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:45.993736   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:45.993736   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:45 GMT
	I1014 08:46:45.994360   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:46.489297   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:46.489297   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:46.489297   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:46.489297   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:46.493802   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:46.493864   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:46.493864   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:46.493864   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:46.493864   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:46.493864   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:46.493936   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:46 GMT
	I1014 08:46:46.493936   15224 round_trippers.go:580]     Audit-Id: 0f9a51bd-3ffb-4247-9dd7-4ac41c6f2d8d
	I1014 08:46:46.494086   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:46.989529   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:46.989529   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:46.989529   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:46.989529   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:46.994311   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:46.994311   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:46.994311   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:46.994311   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:46.994311   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:46 GMT
	I1014 08:46:46.994311   15224 round_trippers.go:580]     Audit-Id: 1b363a92-3f81-4227-9fa2-eaecc3268d56
	I1014 08:46:46.994311   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:46.994311   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:46.994805   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:47.490760   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:47.490850   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:47.490850   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:47.490850   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:47.494140   15224 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:46:47.495075   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:47.495075   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:47.495075   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:47.495075   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:47.495075   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:47.495075   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:47 GMT
	I1014 08:46:47.495075   15224 round_trippers.go:580]     Audit-Id: a8ce0b07-9b0b-453c-ac24-80d0830afdcf
	I1014 08:46:47.495504   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:47.496400   15224 node_ready.go:53] node "multinode-671000" has status "Ready":"False"
	I1014 08:46:47.990019   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:47.990107   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:47.990107   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:47.990107   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:47.995328   15224 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:46:47.995328   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:47.995408   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:47.995408   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:47 GMT
	I1014 08:46:47.995408   15224 round_trippers.go:580]     Audit-Id: 73ad273e-b8d5-47c8-b513-5ad2cbe15613
	I1014 08:46:47.995408   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:47.995408   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:47.995408   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:47.995758   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:48.489749   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:48.489749   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:48.489749   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:48.489749   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:48.495427   15224 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:46:48.495427   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:48.495427   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:48 GMT
	I1014 08:46:48.495427   15224 round_trippers.go:580]     Audit-Id: f3c40b99-dded-4472-b23e-a851851b597a
	I1014 08:46:48.495427   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:48.495427   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:48.495427   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:48.495427   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:48.495688   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:48.989696   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:48.989696   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:48.989696   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:48.989696   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:48.993668   15224 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:46:48.993766   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:48.993766   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:48.993766   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:48 GMT
	I1014 08:46:48.993830   15224 round_trippers.go:580]     Audit-Id: 857dbda7-3121-4648-9747-4c54502a5f60
	I1014 08:46:48.993830   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:48.993830   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:48.993830   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:48.994024   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:49.489817   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:49.489817   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:49.489817   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:49.489817   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:49.494302   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:49.494563   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:49.494563   15224 round_trippers.go:580]     Audit-Id: 3b3708e8-6803-4386-9a89-a4442fab2d53
	I1014 08:46:49.494563   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:49.494563   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:49.494563   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:49.494563   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:49.494678   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:49 GMT
	I1014 08:46:49.495008   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:49.990217   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:49.990217   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:49.990217   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:49.990217   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:49.995090   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:49.995170   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:49.995170   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:49 GMT
	I1014 08:46:49.995170   15224 round_trippers.go:580]     Audit-Id: c9d555a8-f657-47c9-9ae1-8bd3dab1daff
	I1014 08:46:49.995170   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:49.995170   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:49.995170   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:49.995283   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:49.996318   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:49.996986   15224 node_ready.go:53] node "multinode-671000" has status "Ready":"False"
	I1014 08:46:50.489350   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:50.489350   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:50.489350   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:50.489350   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:50.494281   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:50.494281   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:50.494281   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:50 GMT
	I1014 08:46:50.494281   15224 round_trippers.go:580]     Audit-Id: 81abafa1-f88a-4b50-8876-6f8549149675
	I1014 08:46:50.494281   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:50.494281   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:50.494417   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:50.494417   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:50.494579   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:50.989973   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:50.990095   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:50.990095   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:50.990095   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:50.994061   15224 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:46:50.994061   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:50.994061   15224 round_trippers.go:580]     Audit-Id: e02df781-3978-4ddd-a97c-d19007c16b3c
	I1014 08:46:50.994146   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:50.994146   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:50.994146   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:50.994146   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:50.994146   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:50 GMT
	I1014 08:46:50.994433   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:51.489477   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:51.489477   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:51.489477   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:51.489477   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:51.494749   15224 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:46:51.494929   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:51.494929   15224 round_trippers.go:580]     Audit-Id: 8f81d5ea-52eb-406a-8cb2-d2f107e35d1d
	I1014 08:46:51.494929   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:51.494929   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:51.494929   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:51.494929   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:51.494929   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:51 GMT
	I1014 08:46:51.495350   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:51.989691   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:51.989691   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:51.989691   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:51.989691   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:51.994933   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:51.994933   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:51.994933   15224 round_trippers.go:580]     Audit-Id: bf7d1a75-7588-4c37-abd8-3fc85705e86f
	I1014 08:46:51.994933   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:51.994933   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:51.994933   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:51.994933   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:51.994933   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:51 GMT
	I1014 08:46:51.995467   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:52.490142   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:52.490290   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:52.490290   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:52.490290   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:52.495473   15224 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:46:52.495602   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:52.495602   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:52.495602   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:52.495667   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:52.495667   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:52.495667   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:52 GMT
	I1014 08:46:52.495667   15224 round_trippers.go:580]     Audit-Id: d9052630-29a4-408f-8b7b-a2fb03a6c8f9
	I1014 08:46:52.495967   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:52.496634   15224 node_ready.go:53] node "multinode-671000" has status "Ready":"False"
	I1014 08:46:52.989455   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:52.989455   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:52.989455   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:52.989455   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:52.994309   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:52.994309   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:52.994309   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:52.994309   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:52.994309   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:52.994309   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:52 GMT
	I1014 08:46:52.994309   15224 round_trippers.go:580]     Audit-Id: 36577b42-b797-4b67-80a5-12b0603607e8
	I1014 08:46:52.994309   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:52.994897   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:53.489335   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:53.489335   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:53.489335   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:53.489335   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:53.493499   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:53.493499   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:53.493499   15224 round_trippers.go:580]     Audit-Id: e3c5a0e8-38f5-428a-8d10-37d7cbd5deed
	I1014 08:46:53.493499   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:53.493499   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:53.493499   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:53.493499   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:53.493499   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:53 GMT
	I1014 08:46:53.493499   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:53.990296   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:53.990296   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:53.990296   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:53.990296   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:53.995720   15224 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:46:53.995720   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:53.995720   15224 round_trippers.go:580]     Audit-Id: 07ef5a7c-c75c-4d89-8350-a4869eb60e78
	I1014 08:46:53.995826   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:53.995826   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:53.995826   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:53.995826   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:53.995826   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:53 GMT
	I1014 08:46:53.996085   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:54.489398   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:54.489398   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:54.489398   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:54.489398   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:54.494244   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:54.494347   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:54.494347   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:54.494347   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:54.494347   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:54.494457   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:54.494473   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:54 GMT
	I1014 08:46:54.494473   15224 round_trippers.go:580]     Audit-Id: 0ca8a6a3-bcfe-4af1-ad8a-169d3adbc2bd
	I1014 08:46:54.494862   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:54.989766   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:54.989766   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:54.989766   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:54.989766   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:54.994205   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:54.994304   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:54.994384   15224 round_trippers.go:580]     Audit-Id: 2c1df583-84ae-4275-b742-82222889c9b2
	I1014 08:46:54.994384   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:54.994384   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:54.994384   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:54.994461   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:54.994461   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:54 GMT
	I1014 08:46:54.994876   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:54.995247   15224 node_ready.go:53] node "multinode-671000" has status "Ready":"False"
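
The per-request traces interleaved above (round_trippers.go request/response headers, request.go "Response Body: ... [truncated ...]") come from client-go's debugging round tripper, which activates at high klog verbosity; kubectl surfaces the same output via its -v flag (e.g. kubectl get nodes -v=8). A sketch of enabling it in a Go program, assuming klog/v2 is the logging backend; the exact verbosity level (8) is an assumption inferred from the truncated bodies seen here:

// Illustrative: raise klog verbosity so client-go emits the
// round_trippers.go / request.go debug lines seen in this log.
package main

import (
	"flag"

	"k8s.io/klog/v2"
)

func main() {
	// Register klog's flags (-v, -logtostderr, ...) on the default flag set.
	klog.InitFlags(nil)
	// Assumed level: around 7-8, headers plus truncated response bodies
	// are traced for every client-go request.
	_ = flag.Set("v", "8")
	flag.Parse()
	// Any client-go REST traffic issued after this point is traced to stderr.
}
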
	I1014 08:46:55.489610   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:55.489610   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:55.489610   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:55.489610   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:55.494524   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:55.494671   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:55.494671   15224 round_trippers.go:580]     Audit-Id: 082f932d-ec8e-4a2a-ada3-508bd59c62a8
	I1014 08:46:55.494671   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:55.494671   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:55.494671   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:55.494781   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:55.494781   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:55 GMT
	I1014 08:46:55.495539   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:55.990329   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:55.990413   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:55.990413   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:55.990413   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:55.994348   15224 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:46:55.994423   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:55.994423   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:55.994423   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:55.994423   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:55.994423   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:55 GMT
	I1014 08:46:55.994423   15224 round_trippers.go:580]     Audit-Id: 6620601b-e8ab-4b6a-9010-258a9911c717
	I1014 08:46:55.994423   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:55.994916   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:56.489406   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:56.489406   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:56.489406   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:56.489406   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:56.495291   15224 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:46:56.495291   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:56.495291   15224 round_trippers.go:580]     Audit-Id: 95d3dfb4-cab8-4836-8ea2-1e246f19b191
	I1014 08:46:56.495291   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:56.495291   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:56.495291   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:56.495291   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:56.495291   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:56 GMT
	I1014 08:46:56.495757   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:56.990798   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:56.990868   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:56.990868   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:56.990868   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:56.995406   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:56.995406   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:56.995406   15224 round_trippers.go:580]     Audit-Id: b5c1d4bf-a5b4-4852-a272-aa39896b6296
	I1014 08:46:56.995406   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:56.995533   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:56.995533   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:56.995533   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:56.995533   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:56 GMT
	I1014 08:46:56.995999   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:56.996721   15224 node_ready.go:53] node "multinode-671000" has status "Ready":"False"
	I1014 08:46:57.489734   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:57.490409   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:57.490409   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:57.490409   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:57.497223   15224 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1014 08:46:57.497223   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:57.497765   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:57.497765   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:57.497765   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:57 GMT
	I1014 08:46:57.497765   15224 round_trippers.go:580]     Audit-Id: fcbc9d95-4dc2-4e1a-a620-4af521199e00
	I1014 08:46:57.497765   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:57.497765   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:57.498052   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:57.989810   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:57.989810   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:57.989810   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:57.989810   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:57.995111   15224 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:46:57.995111   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:57.995111   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:57.995111   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:57.995111   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:57.995111   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:57.995111   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:57 GMT
	I1014 08:46:57.995280   15224 round_trippers.go:580]     Audit-Id: 4222d4c4-e7f8-444e-8503-911380b5e0dd
	I1014 08:46:57.995437   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:58.490171   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:58.490171   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:58.490171   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:58.490171   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:58.495054   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:58.495054   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:58.495190   15224 round_trippers.go:580]     Audit-Id: 9368c534-c02c-432c-8400-5909cd499382
	I1014 08:46:58.495190   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:58.495190   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:58.495190   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:58.495190   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:58.495190   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:58 GMT
	I1014 08:46:58.495565   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:58.990986   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:58.990986   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:58.991105   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:58.991105   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:58.996046   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:58.996182   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:58.996182   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:58.996182   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:58.996182   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:58.996182   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:58 GMT
	I1014 08:46:58.996182   15224 round_trippers.go:580]     Audit-Id: c67f02f0-e95e-45d1-bc36-d41e98e658c4
	I1014 08:46:58.996182   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:58.996579   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:58.997219   15224 node_ready.go:53] node "multinode-671000" has status "Ready":"False"
	I1014 08:46:59.490263   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:59.490263   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:59.490263   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:59.490263   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:59.495397   15224 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:46:59.495482   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:59.495482   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:59.495571   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:59 GMT
	I1014 08:46:59.495571   15224 round_trippers.go:580]     Audit-Id: 5b632c91-9f87-46d1-a6cd-9f538d441472
	I1014 08:46:59.495571   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:59.495571   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:59.495571   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:59.495752   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1895","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5486 chars]
	I1014 08:46:59.989450   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:46:59.989450   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:59.989450   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:59.989450   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:59.994158   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:46:59.994224   15224 round_trippers.go:577] Response Headers:
	I1014 08:46:59.994224   15224 round_trippers.go:580]     Audit-Id: ce781060-8ec1-44c2-8c26-2d7adda6081d
	I1014 08:46:59.994224   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:46:59.994224   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:46:59.994224   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:46:59.994224   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:46:59.994224   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:46:59 GMT
	I1014 08:46:59.994675   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:46:59.995415   15224 node_ready.go:49] node "multinode-671000" has status "Ready":"True"
	I1014 08:46:59.995501   15224 node_ready.go:38] duration metric: took 39.0062602s for node "multinode-671000" to be "Ready" ...
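What the loop above is doing: minikube issues a GET against /api/v1/nodes/multinode-671000 roughly every 500ms (visible in the timestamps) and inspects the node's Ready condition until it flips to True or a deadline expires. A minimal sketch of that polling pattern in Go using client-go — this is an illustrative reconstruction, not minikube's actual implementation; the kubeconfig path, node name, and timeout are assumptions taken from the log context:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeIsReady reports whether the node's Ready condition is True.
func nodeIsReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Illustrative kubeconfig path; minikube builds its client differently.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll every ~500ms, as the log timestamps suggest; timeout is assumed.
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "multinode-671000", metav1.GetOptions{})
		if err == nil && nodeIsReady(node) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for node readiness")
}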
	I1014 08:46:59.995624   15224 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 08:46:59.995728   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods
	I1014 08:46:59.995791   15224 round_trippers.go:469] Request Headers:
	I1014 08:46:59.995834   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:46:59.995834   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:46:59.999596   15224 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:47:00.000468   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:00.000468   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:00.000468   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:00.000551   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:00.000551   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:00 GMT
	I1014 08:47:00.000551   15224 round_trippers.go:580]     Audit-Id: 00661f32-59b5-4493-85d1-37c2d2ec69d5
	I1014 08:47:00.000551   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:00.001722   15224 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1981"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 90485 chars]
	I1014 08:47:00.006322   15224 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-fs9ct" in "kube-system" namespace to be "Ready" ...
	I1014 08:47:00.006852   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:00.006852   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:00.006852   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:00.006852   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:00.010413   15224 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:47:00.010413   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:00.010959   15224 round_trippers.go:580]     Audit-Id: 2a6f04ea-daa5-47f0-91c0-1cd22fd3fdef
	I1014 08:47:00.010959   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:00.010959   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:00.010959   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:00.010959   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:00.010959   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:00 GMT
	I1014 08:47:00.011125   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I1014 08:47:00.011664   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:00.011946   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:00.011946   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:00.011946   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:00.015300   15224 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:47:00.015300   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:00.015300   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:00.015300   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:00.015300   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:00.015300   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:00.015300   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:00 GMT
	I1014 08:47:00.015300   15224 round_trippers.go:580]     Audit-Id: 33c1a51e-e3c3-4e7f-a0c9-e9f655238198
	I1014 08:47:00.015300   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:00.506917   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:00.506917   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:00.506917   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:00.506917   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:00.511099   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:47:00.511845   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:00.511845   15224 round_trippers.go:580]     Audit-Id: e6063197-bc1a-4dbb-957c-6c1f96de4807
	I1014 08:47:00.511845   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:00.511845   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:00.511845   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:00.511845   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:00.511845   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:00 GMT
	I1014 08:47:00.512146   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I1014 08:47:00.513106   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:00.513106   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:00.513106   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:00.513106   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:00.515410   15224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 08:47:00.516010   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:00.516057   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:00.516057   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:00.516057   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:00.516057   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:00 GMT
	I1014 08:47:00.516057   15224 round_trippers.go:580]     Audit-Id: 966850eb-f317-44d0-a477-5f237ba79d0a
	I1014 08:47:00.516057   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:00.516184   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:01.006773   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:01.006773   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:01.006773   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:01.006773   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:01.011420   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:47:01.011420   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:01.011420   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:01.011420   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:01 GMT
	I1014 08:47:01.011529   15224 round_trippers.go:580]     Audit-Id: 17612ab4-44ae-468f-a147-4fd39fa3429b
	I1014 08:47:01.011529   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:01.011529   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:01.011529   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:01.011628   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I1014 08:47:01.012748   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:01.012748   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:01.012836   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:01.012836   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:01.015372   15224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 08:47:01.016372   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:01.016372   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:01.016372   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:01 GMT
	I1014 08:47:01.016372   15224 round_trippers.go:580]     Audit-Id: c794ac18-a749-4264-938e-a5ece5b88a3c
	I1014 08:47:01.016372   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:01.016372   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:01.016372   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:01.016858   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:01.506525   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:01.506525   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:01.506525   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:01.506525   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:01.515311   15224 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1014 08:47:01.515500   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:01.515607   15224 round_trippers.go:580]     Audit-Id: 90af5554-ab19-4a16-9d30-debf4eee213c
	I1014 08:47:01.515629   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:01.515629   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:01.515629   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:01.515629   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:01.515629   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:01 GMT
	I1014 08:47:01.515629   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I1014 08:47:01.516665   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:01.516665   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:01.516840   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:01.516840   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:01.522253   15224 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:47:01.522253   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:01.522253   15224 round_trippers.go:580]     Audit-Id: 318e2a8f-9313-497b-880c-d640e0c4ccda
	I1014 08:47:01.522253   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:01.522253   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:01.522253   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:01.522253   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:01.522253   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:01 GMT
	I1014 08:47:01.522956   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:02.006385   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:02.007063   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:02.007063   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:02.007063   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:02.011482   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:47:02.011552   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:02.011629   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:02.011629   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:02.011629   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:02.011629   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:02.011629   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:02 GMT
	I1014 08:47:02.011629   15224 round_trippers.go:580]     Audit-Id: ac630c37-4e6f-483e-8137-1abf2e45cbd9
	I1014 08:47:02.012066   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I1014 08:47:02.012970   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:02.013039   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:02.013039   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:02.013039   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:02.015828   15224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 08:47:02.015828   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:02.015828   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:02.015828   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:02 GMT
	I1014 08:47:02.015828   15224 round_trippers.go:580]     Audit-Id: 06e8133b-4791-4a3b-a538-6c542d2a8c22
	I1014 08:47:02.015828   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:02.015828   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:02.015828   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:02.016237   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:02.016774   15224 pod_ready.go:103] pod "coredns-7c65d6cfc9-fs9ct" in "kube-system" namespace has status "Ready":"False"
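The pod wait applies the same poll-until-condition pattern per pod: here coredns-7c65d6cfc9-fs9ct is fetched alongside its node until the pod's PodReady condition reports True. A hypothetical helper for that check, reusing the imports from the node sketch above (again a sketch of the pattern, not minikube's pod_ready.go code):

// podIsReady reports whether the pod's PodReady condition is True,
// which is what "Ready":"False" in the log line above refers to.
func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}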
	I1014 08:47:02.507142   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:02.507226   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:02.507226   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:02.507226   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:02.511342   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:47:02.511342   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:02.511436   15224 round_trippers.go:580]     Audit-Id: 9f7a3529-fdca-43f7-8b1b-61d8f29d36ad
	I1014 08:47:02.511436   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:02.511436   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:02.511436   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:02.511436   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:02.511436   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:02 GMT
	I1014 08:47:02.512091   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I1014 08:47:02.512959   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:02.513014   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:02.513014   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:02.513014   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:02.515897   15224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 08:47:02.515897   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:02.515897   15224 round_trippers.go:580]     Audit-Id: 102608cd-9d1b-404d-a652-368ac49fdc82
	I1014 08:47:02.515897   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:02.515897   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:02.515897   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:02.515897   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:02.515897   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:02 GMT
	I1014 08:47:02.515897   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:03.007073   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:03.007073   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:03.007073   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:03.007073   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:03.012190   15224 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:47:03.012190   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:03.012190   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:03.012190   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:03.012349   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:03.012349   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:03.012349   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:03 GMT
	I1014 08:47:03.012349   15224 round_trippers.go:580]     Audit-Id: 23145347-e683-4d7a-814b-569f0c15a257
	I1014 08:47:03.012522   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I1014 08:47:03.013465   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:03.013524   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:03.013524   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:03.013524   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:03.022157   15224 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1014 08:47:03.022157   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:03.022157   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:03 GMT
	I1014 08:47:03.022157   15224 round_trippers.go:580]     Audit-Id: 0304324c-83e4-4b30-a45f-95b24417cab2
	I1014 08:47:03.022157   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:03.022157   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:03.022157   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:03.022157   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:03.023151   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:03.507020   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:03.507020   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:03.507020   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:03.507020   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:03.515852   15224 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1014 08:47:03.515852   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:03.515852   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:03.515852   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:03.515852   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:03.515852   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:03 GMT
	I1014 08:47:03.515852   15224 round_trippers.go:580]     Audit-Id: 00ce8782-b3c5-418b-8a83-3b4eba2ad8da
	I1014 08:47:03.515852   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:03.515852   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I1014 08:47:03.517418   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:03.517418   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:03.517418   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:03.517418   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:03.520702   15224 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:47:03.520702   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:03.520702   15224 round_trippers.go:580]     Audit-Id: db73e000-7e18-4dbe-9c67-ebba8fb8f343
	I1014 08:47:03.520702   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:03.520702   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:03.520702   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:03.520702   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:03.520702   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:03 GMT
	I1014 08:47:03.520702   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:04.007090   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:04.007090   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:04.007090   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:04.007090   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:04.013222   15224 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1014 08:47:04.013222   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:04.013222   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:04 GMT
	I1014 08:47:04.013222   15224 round_trippers.go:580]     Audit-Id: 37596bf1-027e-4a33-804b-95394d501f4d
	I1014 08:47:04.013222   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:04.013222   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:04.013222   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:04.013222   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:04.013655   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I1014 08:47:04.014633   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:04.014633   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:04.014633   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:04.014633   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:04.017871   15224 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:47:04.017959   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:04.017959   15224 round_trippers.go:580]     Audit-Id: 9af1f259-13d3-472f-85bb-da8201f00842
	I1014 08:47:04.017959   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:04.017959   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:04.017959   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:04.017959   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:04.017959   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:04 GMT
	I1014 08:47:04.017959   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:04.018929   15224 pod_ready.go:103] pod "coredns-7c65d6cfc9-fs9ct" in "kube-system" namespace has status "Ready":"False"
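
The round_trippers.go lines in this log come from client-go's verbose debug transport, which wraps the HTTP transport and prints each request's verb and URL, its request headers, the response status with elapsed time, and the response headers before handing the response back unchanged. Below is a minimal sketch of that wrapping pattern; the loggingRoundTripper type and its output format are hypothetical stand-ins, not the client-go source.

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	// loggingRoundTripper is a hypothetical debug wrapper: it logs the request
	// line and headers, forwards the request to the wrapped transport, then
	// logs the response status (with latency) and headers.
	type loggingRoundTripper struct {
		next http.RoundTripper
	}

	func (l loggingRoundTripper) RoundTrip(req *http.Request) (*http.Response, error) {
		fmt.Printf("%s %s\n", req.Method, req.URL)
		fmt.Println("Request Headers:")
		for k, v := range req.Header {
			fmt.Printf("    %s: %v\n", k, v)
		}
		start := time.Now()
		resp, err := l.next.RoundTrip(req)
		if err != nil {
			return nil, err
		}
		fmt.Printf("Response Status: %s in %d milliseconds\n",
			resp.Status, time.Since(start).Milliseconds())
		fmt.Println("Response Headers:")
		for k, v := range resp.Header {
			fmt.Printf("    %s: %v\n", k, v)
		}
		return resp, nil
	}

	func main() {
		// Install the wrapper on an ordinary client; every request made
		// through it is logged in the style seen above.
		client := &http.Client{Transport: loggingRoundTripper{next: http.DefaultTransport}}
		resp, err := client.Get("https://example.com/")
		if err != nil {
			fmt.Println("request failed:", err)
			return
		}
		resp.Body.Close()
	}

Because the wrapper implements http.RoundTripper itself, it composes with any other transport, which is why the same header dump appears for every GET in this log regardless of the resource being fetched.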
	I1014 08:47:04.507142   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:04.507223   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:04.507223   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:04.507223   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:04.512321   15224 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:47:04.512402   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:04.512508   15224 round_trippers.go:580]     Audit-Id: f2852b26-65ec-4e52-adb6-8e3f8bcf790b
	I1014 08:47:04.512508   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:04.512508   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:04.512508   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:04.512508   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:04.512572   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:04 GMT
	I1014 08:47:04.512761   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I1014 08:47:04.513530   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:04.513530   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:04.513604   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:04.513604   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:04.516658   15224 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:47:04.516658   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:04.516767   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:04.516767   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:04.516767   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:04 GMT
	I1014 08:47:04.516767   15224 round_trippers.go:580]     Audit-Id: a2b83566-4bad-4a91-8aa9-80ed347dabf6
	I1014 08:47:04.516767   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:04.516767   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:04.517095   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:05.007774   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:05.007853   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:05.007853   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:05.007853   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:05.011679   15224 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:47:05.011679   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:05.011799   15224 round_trippers.go:580]     Audit-Id: 708ac586-f5ff-4af7-99d6-c53fb95089c3
	I1014 08:47:05.011799   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:05.011799   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:05.011799   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:05.011799   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:05.011903   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:05 GMT
	I1014 08:47:05.012191   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I1014 08:47:05.013122   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:05.013122   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:05.013122   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:05.013122   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:05.016764   15224 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:47:05.016872   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:05.016872   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:05.016872   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:05 GMT
	I1014 08:47:05.016872   15224 round_trippers.go:580]     Audit-Id: f9e1ee8e-1181-4ec9-b450-43010ae103d9
	I1014 08:47:05.016872   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:05.016872   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:05.016872   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:05.017839   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:05.507030   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:05.507706   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:05.507706   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:05.507706   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:05.512023   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:47:05.512023   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:05.512023   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:05.512023   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:05.512096   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:05.512096   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:05 GMT
	I1014 08:47:05.512096   15224 round_trippers.go:580]     Audit-Id: a2f37341-f157-495f-a7ce-1d46bbabc594
	I1014 08:47:05.512096   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:05.512325   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I1014 08:47:05.513009   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:05.513009   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:05.513178   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:05.513178   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:05.516913   15224 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:47:05.516970   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:05.516970   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:05 GMT
	I1014 08:47:05.517020   15224 round_trippers.go:580]     Audit-Id: 1cdfac50-9b2f-44bf-81de-280089b69120
	I1014 08:47:05.517020   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:05.517020   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:05.517020   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:05.517020   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:05.517479   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:06.007350   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:06.008037   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:06.008037   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:06.008037   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:06.013381   15224 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:47:06.013502   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:06.013502   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:06.013502   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:06 GMT
	I1014 08:47:06.013643   15224 round_trippers.go:580]     Audit-Id: 3e765249-e89d-4ddc-8f0f-dc2eb05205e0
	I1014 08:47:06.013643   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:06.013643   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:06.013643   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:06.013933   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I1014 08:47:06.014886   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:06.014943   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:06.014943   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:06.014943   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:06.018169   15224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 08:47:06.018169   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:06.018235   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:06.018235   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:06.018235   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:06.018235   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:06 GMT
	I1014 08:47:06.018235   15224 round_trippers.go:580]     Audit-Id: 69d90dd1-8c46-4162-ac62-55df851ff11c
	I1014 08:47:06.018235   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:06.018689   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:06.019232   15224 pod_ready.go:103] pod "coredns-7c65d6cfc9-fs9ct" in "kube-system" namespace has status "Ready":"False"
	I1014 08:47:06.506428   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:06.506428   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:06.506428   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:06.506428   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:06.511682   15224 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:47:06.511766   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:06.511766   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:06.511766   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:06.511766   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:06 GMT
	I1014 08:47:06.511766   15224 round_trippers.go:580]     Audit-Id: fd06389c-4170-4969-b3ef-cc937b6dc64c
	I1014 08:47:06.511766   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:06.511766   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:06.512051   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I1014 08:47:06.513133   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:06.513226   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:06.513226   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:06.513297   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:06.518571   15224 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:47:06.518571   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:06.518571   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:06.518571   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:06.518571   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:06.518571   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:06 GMT
	I1014 08:47:06.518571   15224 round_trippers.go:580]     Audit-Id: ab8a2e71-e016-4381-94bf-4801cc8f440c
	I1014 08:47:06.518571   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:06.519092   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:07.006749   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:07.006749   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:07.006749   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:07.006749   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:07.011571   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:47:07.011667   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:07.011667   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:07.011667   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:07 GMT
	I1014 08:47:07.011667   15224 round_trippers.go:580]     Audit-Id: c218ebbe-21a8-4892-9852-1c01dfffcc96
	I1014 08:47:07.011667   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:07.011667   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:07.011667   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:07.011914   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I1014 08:47:07.012147   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:07.012745   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:07.012745   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:07.012745   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:07.015174   15224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 08:47:07.015174   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:07.015174   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:07 GMT
	I1014 08:47:07.015174   15224 round_trippers.go:580]     Audit-Id: b1e11afd-9a6b-43b2-83c4-46902b554b7e
	I1014 08:47:07.015726   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:07.015726   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:07.015726   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:07.015726   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:07.015790   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:07.507147   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:07.507147   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:07.507147   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:07.507147   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:07.512494   15224 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:47:07.512515   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:07.512515   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:07 GMT
	I1014 08:47:07.512515   15224 round_trippers.go:580]     Audit-Id: 31a843f6-ed99-44d1-b93d-3f93e09d9add
	I1014 08:47:07.512576   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:07.512576   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:07.512576   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:07.512576   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:07.512847   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I1014 08:47:07.513892   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:07.513892   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:07.513892   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:07.513892   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:07.516537   15224 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1014 08:47:07.516590   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:07.516590   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:07 GMT
	I1014 08:47:07.516590   15224 round_trippers.go:580]     Audit-Id: 0d6c10fb-d992-49c0-9edc-5d660ea93dd2
	I1014 08:47:07.516590   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:07.516590   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:07.516590   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:07.516590   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:07.516835   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:08.007721   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:08.007721   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:08.007721   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:08.007825   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:08.012247   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:47:08.012450   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:08.012450   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:08.012450   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:08.012450   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:08.012450   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:08 GMT
	I1014 08:47:08.012450   15224 round_trippers.go:580]     Audit-Id: 520ff5a4-4bfd-40d1-a319-20c62f138073
	I1014 08:47:08.012450   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:08.012730   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I1014 08:47:08.013718   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:08.013786   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:08.013786   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:08.013786   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:08.016793   15224 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:47:08.016793   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:08.016887   15224 round_trippers.go:580]     Audit-Id: 4cbed752-f465-4e96-b986-f1a19c1c9c0d
	I1014 08:47:08.016887   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:08.016887   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:08.016887   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:08.016887   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:08.016887   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:08 GMT
	I1014 08:47:08.017219   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:08.507121   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:08.507711   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:08.507711   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:08.507711   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:08.511937   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:47:08.512056   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:08.512056   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:08 GMT
	I1014 08:47:08.512056   15224 round_trippers.go:580]     Audit-Id: e8ca3373-f077-463a-b9d5-c452fab90974
	I1014 08:47:08.512184   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:08.512184   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:08.512184   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:08.512184   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:08.512334   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I1014 08:47:08.513386   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:08.513386   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:08.513386   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:08.513386   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:08.519452   15224 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1014 08:47:08.519452   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:08.519452   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:08.519452   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:08 GMT
	I1014 08:47:08.519452   15224 round_trippers.go:580]     Audit-Id: ced7667e-6c66-46a4-9853-0a33235d155d
	I1014 08:47:08.519452   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:08.519452   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:08.519452   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:08.519452   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:08.520308   15224 pod_ready.go:103] pod "coredns-7c65d6cfc9-fs9ct" in "kube-system" namespace has status "Ready":"False"
	I1014 08:47:09.006595   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:09.006595   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:09.006595   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:09.006595   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:09.011936   15224 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:47:09.011988   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:09.011988   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:09.011988   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:09.011988   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:09.011988   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:09.011988   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:09 GMT
	I1014 08:47:09.011988   15224 round_trippers.go:580]     Audit-Id: c484f081-83c1-4be0-adc2-7aa92f0a3dc6
	I1014 08:47:09.011988   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I1014 08:47:09.012716   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:09.012716   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:09.012716   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:09.012716   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:09.015695   15224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 08:47:09.015695   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:09.015695   15224 round_trippers.go:580]     Audit-Id: 8bd75b39-2234-4b8b-9013-177a866df8eb
	I1014 08:47:09.015695   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:09.015817   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:09.015817   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:09.015817   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:09.015817   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:09 GMT
	I1014 08:47:09.015989   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:09.506435   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:09.506435   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:09.506435   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:09.506435   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:09.513063   15224 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1014 08:47:09.513145   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:09.513145   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:09.513145   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:09 GMT
	I1014 08:47:09.513145   15224 round_trippers.go:580]     Audit-Id: 06f28490-94ff-40dc-99bb-5cb85f73a931
	I1014 08:47:09.513280   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:09.513280   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:09.513280   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:09.513662   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I1014 08:47:09.514569   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:09.514664   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:09.514664   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:09.514664   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:09.518160   15224 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:47:09.518160   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:09.518160   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:09 GMT
	I1014 08:47:09.518160   15224 round_trippers.go:580]     Audit-Id: c6525d6e-9818-4d1a-90c6-7806fadb3ce2
	I1014 08:47:09.518160   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:09.518470   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:09.518540   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:09.518540   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:09.518827   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:10.006970   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:10.006970   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:10.006970   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:10.006970   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:10.011008   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:47:10.011088   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:10.011088   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:10.011088   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:10.011088   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:10 GMT
	I1014 08:47:10.011088   15224 round_trippers.go:580]     Audit-Id: b2e32801-8b13-4cf7-b163-91bc80134065
	I1014 08:47:10.011088   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:10.011088   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:10.011350   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I1014 08:47:10.012213   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:10.012293   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:10.012293   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:10.012293   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:10.013972   15224 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1014 08:47:10.014713   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:10.014713   15224 round_trippers.go:580]     Audit-Id: c48f83b8-1463-4288-81b2-41209700f82a
	I1014 08:47:10.014713   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:10.014713   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:10.014713   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:10.014713   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:10.014713   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:10 GMT
	I1014 08:47:10.015098   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:10.506538   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:10.506538   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:10.506538   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:10.506538   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:10.511110   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:47:10.511178   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:10.511178   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:10.511178   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:10 GMT
	I1014 08:47:10.511178   15224 round_trippers.go:580]     Audit-Id: b6155baf-4833-4d66-8844-2fad966eab08
	I1014 08:47:10.511178   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:10.511178   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:10.511178   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:10.511486   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I1014 08:47:10.512394   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:10.512394   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:10.512394   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:10.512394   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:10.515291   15224 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1014 08:47:10.515335   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:10.515335   15224 round_trippers.go:580]     Audit-Id: c8c32713-140e-4e16-b02c-5eb874c6be6c
	I1014 08:47:10.515335   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:10.515335   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:10.515382   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:10.515382   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:10.515382   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:10 GMT
	I1014 08:47:10.515723   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:11.006435   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:11.006435   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:11.006435   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:11.006435   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:11.011127   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:47:11.011127   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:11.011127   15224 round_trippers.go:580]     Audit-Id: 9c151fd3-37d0-4147-b731-61c1b378ba84
	I1014 08:47:11.011127   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:11.011127   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:11.011127   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:11.011127   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:11.011127   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:11 GMT
	I1014 08:47:11.011361   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I1014 08:47:11.012761   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:11.012761   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:11.012946   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:11.012946   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:11.015307   15224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 08:47:11.015307   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:11.015307   15224 round_trippers.go:580]     Audit-Id: db06ee2f-d639-4669-91eb-2360749abc27
	I1014 08:47:11.015307   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:11.015307   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:11.015532   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:11.015532   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:11.015532   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:11 GMT
	I1014 08:47:11.015860   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:11.016241   15224 pod_ready.go:103] pod "coredns-7c65d6cfc9-fs9ct" in "kube-system" namespace has status "Ready":"False"
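
The block above is one iteration of minikube's pod-readiness wait loop: roughly every 500ms it re-fetches the CoreDNS Pod (and its Node) from the API server and checks the Pod's Ready condition, logging "Ready":"False" until the condition flips to True or the wait times out. The sketch below is a minimal, hypothetical reconstruction of such a poll with client-go; the namespace and pod name come from the log, but the kubeconfig path and the loop structure are assumptions, not minikube's actual pod_ready.go code.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the Pod's Ready condition is "True".
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumed kubeconfig location; the test run above uses its own.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Re-fetch the pod on a ~500ms cadence, matching the timestamps above.
	for {
		pod, err := client.CoreV1().Pods("kube-system").Get(
			context.TODO(), "coredns-7c65d6cfc9-fs9ct", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		fmt.Println(`pod "coredns-7c65d6cfc9-fs9ct" has status "Ready":"False"`)
		time.Sleep(500 * time.Millisecond)
	}
}
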
	I1014 08:47:11.507392   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:11.507392   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:11.507392   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:11.507392   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:11.512533   15224 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:47:11.512620   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:11.512620   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:11.512620   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:11 GMT
	I1014 08:47:11.512620   15224 round_trippers.go:580]     Audit-Id: bac9f82c-315d-440c-8078-0b0e4a0ee41c
	I1014 08:47:11.512620   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:11.512620   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:11.512620   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:11.513007   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I1014 08:47:11.513722   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:11.513722   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:11.513722   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:11.513722   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:11.518444   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:47:11.518444   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:11.518444   15224 round_trippers.go:580]     Audit-Id: c838a67a-b881-43ce-ad4f-6a87e4c89a4c
	I1014 08:47:11.518444   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:11.518444   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:11.518444   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:11.518444   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:11.518444   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:11 GMT
	I1014 08:47:11.518444   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:12.006582   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:12.006582   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:12.007326   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:12.007326   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:12.012393   15224 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:47:12.012393   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:12.012393   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:12.012393   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:12.012393   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:12 GMT
	I1014 08:47:12.012393   15224 round_trippers.go:580]     Audit-Id: 34e82f0d-ec81-4dee-bea3-e36516aa5f0d
	I1014 08:47:12.012393   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:12.012393   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:12.012721   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I1014 08:47:12.013553   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:12.013553   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:12.013651   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:12.013651   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:12.017785   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:47:12.017785   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:12.017785   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:12.017785   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:12.017785   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:12 GMT
	I1014 08:47:12.017785   15224 round_trippers.go:580]     Audit-Id: b2eb7e49-fe42-4555-b1bf-879ffb0ae3ba
	I1014 08:47:12.017785   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:12.017785   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:12.018477   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:12.507323   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:12.507423   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:12.507423   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:12.507423   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:12.512108   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:47:12.512207   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:12.512207   15224 round_trippers.go:580]     Audit-Id: 546f5a31-2c3c-47ff-8a44-f2843dce4a5e
	I1014 08:47:12.512207   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:12.512207   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:12.512207   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:12.512347   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:12.512424   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:12 GMT
	I1014 08:47:12.512488   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I1014 08:47:12.513507   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:12.513568   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:12.513568   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:12.513568   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:12.517013   15224 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:47:12.517142   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:12.517142   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:12 GMT
	I1014 08:47:12.517142   15224 round_trippers.go:580]     Audit-Id: 26e64686-e758-4892-81fd-55e324997e47
	I1014 08:47:12.517142   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:12.517142   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:12.517142   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:12.517142   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:12.517437   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:13.006520   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:13.006520   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:13.006520   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:13.006520   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:13.010891   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:47:13.010891   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:13.010891   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:13.010891   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:13.010891   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:13 GMT
	I1014 08:47:13.010891   15224 round_trippers.go:580]     Audit-Id: 6a9084a1-6c4c-4c8d-99d6-65142ce35539
	I1014 08:47:13.010891   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:13.010891   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:13.011275   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I1014 08:47:13.012142   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:13.012246   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:13.012246   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:13.012246   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:13.014315   15224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 08:47:13.015307   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:13.015307   15224 round_trippers.go:580]     Audit-Id: f4309c0f-831d-4770-9f1e-e131d7b0f9b4
	I1014 08:47:13.015307   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:13.015307   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:13.015307   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:13.015307   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:13.015307   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:13 GMT
	I1014 08:47:13.015481   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:13.016424   15224 pod_ready.go:103] pod "coredns-7c65d6cfc9-fs9ct" in "kube-system" namespace has status "Ready":"False"
	I1014 08:47:13.506417   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:13.507223   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:13.507223   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:13.507223   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:13.511693   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:47:13.511817   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:13.511817   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:13.511817   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:13.511817   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:13 GMT
	I1014 08:47:13.511817   15224 round_trippers.go:580]     Audit-Id: 7fad4c8b-b30e-498b-9ec4-5606a8ade29c
	I1014 08:47:13.511817   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:13.511899   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:13.512089   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I1014 08:47:13.513074   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:13.513146   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:13.513146   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:13.513146   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:13.516684   15224 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:47:13.516684   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:13.516684   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:13 GMT
	I1014 08:47:13.516684   15224 round_trippers.go:580]     Audit-Id: 8f255108-f0af-4a99-9143-fb21a67f899e
	I1014 08:47:13.516684   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:13.516684   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:13.516684   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:13.516684   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:13.516999   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:14.007413   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:14.007413   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:14.007413   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:14.007413   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:14.012790   15224 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:47:14.012790   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:14.012790   15224 round_trippers.go:580]     Audit-Id: 78f8ee80-1015-40f0-97f5-0981b36b4386
	I1014 08:47:14.012790   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:14.012790   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:14.012790   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:14.012790   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:14.012790   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:14 GMT
	I1014 08:47:14.012790   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I1014 08:47:14.014159   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:14.014159   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:14.014159   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:14.014159   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:14.017212   15224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 08:47:14.017212   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:14.017212   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:14.017212   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:14 GMT
	I1014 08:47:14.017212   15224 round_trippers.go:580]     Audit-Id: c14f110c-e4e6-46fd-8da0-9ac9a8be1e50
	I1014 08:47:14.017212   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:14.017212   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:14.017212   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:14.017212   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:14.507162   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:14.507162   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:14.507162   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:14.507162   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:14.513064   15224 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:47:14.513064   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:14.513064   15224 round_trippers.go:580]     Audit-Id: 62f7ba62-de7d-4c73-84ad-30979148efb0
	I1014 08:47:14.513064   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:14.513170   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:14.513170   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:14.513170   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:14.513170   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:14 GMT
	I1014 08:47:14.513471   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I1014 08:47:14.514151   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:14.514151   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:14.514337   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:14.514337   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:14.519788   15224 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:47:14.519788   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:14.519788   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:14 GMT
	I1014 08:47:14.519788   15224 round_trippers.go:580]     Audit-Id: a51cfb8e-c4d2-4f0d-b13a-26340550ffa1
	I1014 08:47:14.519892   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:14.519892   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:14.519919   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:14.519950   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:14.519950   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:15.007661   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:15.007661   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:15.007661   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:15.007661   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:15.012074   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:47:15.012074   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:15.012074   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:15.012074   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:15 GMT
	I1014 08:47:15.012074   15224 round_trippers.go:580]     Audit-Id: baed5bb1-35d7-47bb-86e2-0aa2501c8146
	I1014 08:47:15.012074   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:15.012074   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:15.012074   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:15.012380   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I1014 08:47:15.013197   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:15.013197   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:15.013362   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:15.013362   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:15.018628   15224 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:47:15.018628   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:15.018628   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:15.018628   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:15.018628   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:15.018628   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:15 GMT
	I1014 08:47:15.018628   15224 round_trippers.go:580]     Audit-Id: b8ca27a2-03fc-477b-a15a-95e8a6c0c70d
	I1014 08:47:15.018628   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:15.018628   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:15.019476   15224 pod_ready.go:103] pod "coredns-7c65d6cfc9-fs9ct" in "kube-system" namespace has status "Ready":"False"
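
Every GET / Request Headers / Response Status / Response Headers group in this trace is emitted by client-go's debug round-tripper, which wraps the HTTP transport and, at high klog verbosity, logs each exchange around the real call. A rough sketch of that wrapping pattern (an illustration, not client-go's actual round_trippers.go):

package main

import (
	"log"
	"net/http"
	"time"
)

// debugRoundTripper logs every request/response, then delegates to the
// wrapped transport.
type debugRoundTripper struct {
	delegate http.RoundTripper
}

func (d debugRoundTripper) RoundTrip(req *http.Request) (*http.Response, error) {
	log.Printf("%s %s", req.Method, req.URL)
	log.Print("Request Headers:")
	for k, v := range req.Header {
		log.Printf("    %s: %v", k, v)
	}
	start := time.Now()
	resp, err := d.delegate.RoundTrip(req)
	if err != nil {
		return nil, err
	}
	log.Printf("Response Status: %s in %d milliseconds",
		resp.Status, time.Since(start).Milliseconds())
	log.Print("Response Headers:")
	for k, v := range resp.Header {
		log.Printf("    %s: %v", k, v)
	}
	return resp, nil
}

func main() {
	// Install the wrapper on a plain client; client-go does the
	// equivalent inside its own transport stack.
	client := &http.Client{Transport: debugRoundTripper{http.DefaultTransport}}
	if resp, err := client.Get("https://example.com/"); err == nil {
		resp.Body.Close()
	}
}
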
	I1014 08:47:15.506477   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:15.506477   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:15.506477   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:15.506477   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:15.511166   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:47:15.511166   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:15.511166   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:15.511307   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:15.511307   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:15.511307   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:15 GMT
	I1014 08:47:15.511307   15224 round_trippers.go:580]     Audit-Id: e916cecb-826d-4354-af10-9cabb28bd69a
	I1014 08:47:15.511307   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:15.511451   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I1014 08:47:15.511892   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:15.511892   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:15.511892   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:15.511892   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:15.520011   15224 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1014 08:47:15.520046   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:15.520105   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:15.520105   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:15 GMT
	I1014 08:47:15.520105   15224 round_trippers.go:580]     Audit-Id: 33f64d34-16aa-4a43-967b-b926d5f98321
	I1014 08:47:15.520105   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:15.520105   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:15.520105   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:15.520105   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:16.006631   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:16.006631   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:16.006631   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:16.006631   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:16.011620   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:47:16.011620   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:16.011620   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:16 GMT
	I1014 08:47:16.011620   15224 round_trippers.go:580]     Audit-Id: 31e70c7d-5221-43ea-8127-e8536f18b112
	I1014 08:47:16.011742   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:16.011742   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:16.011742   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:16.011742   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:16.011957   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I1014 08:47:16.013142   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:16.013248   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:16.013248   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:16.013248   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:16.015444   15224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 08:47:16.015444   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:16.016004   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:16.016004   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:16.016004   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:16 GMT
	I1014 08:47:16.016004   15224 round_trippers.go:580]     Audit-Id: 0d2c6a23-82eb-4de2-a65b-9c5ccd41bf3c
	I1014 08:47:16.016004   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:16.016004   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:16.016488   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:16.508222   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:16.508222   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:16.508222   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:16.508222   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:16.513132   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:47:16.513276   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:16.513276   15224 round_trippers.go:580]     Audit-Id: 92cefe22-a976-4d40-8ef5-1fd0de7c9281
	I1014 08:47:16.513379   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:16.513379   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:16.513379   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:16.513379   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:16.513422   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:16 GMT
	I1014 08:47:16.513422   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I1014 08:47:16.514218   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:16.514218   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:16.514218   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:16.514218   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:16.520652   15224 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1014 08:47:16.520652   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:16.520652   15224 round_trippers.go:580]     Audit-Id: 9f45b35e-3d7b-48c1-8226-c0835e02ceb7
	I1014 08:47:16.520652   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:16.520652   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:16.520652   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:16.520652   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:16.520652   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:16 GMT
	I1014 08:47:16.520811   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:17.006825   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:17.006825   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:17.006825   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:17.006825   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:17.010058   15224 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:47:17.011080   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:17.011133   15224 round_trippers.go:580]     Audit-Id: 18db3927-457f-4476-a646-5e92011f1be4
	I1014 08:47:17.011133   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:17.011133   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:17.011133   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:17.011133   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:17.011133   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:17 GMT
	I1014 08:47:17.011413   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I1014 08:47:17.012320   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:17.012320   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:17.012320   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:17.012320   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:17.014796   15224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 08:47:17.014796   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:17.014796   15224 round_trippers.go:580]     Audit-Id: 696cd3c8-b82b-4fbd-bc5f-e9c53892286d
	I1014 08:47:17.015321   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:17.015321   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:17.015321   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:17.015321   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:17.015321   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:17 GMT
	I1014 08:47:17.015614   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:17.507109   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:17.507232   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:17.507232   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:17.507232   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:17.511533   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:47:17.511533   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:17.511533   15224 round_trippers.go:580]     Audit-Id: f7633e99-25ed-4fd3-8d31-8bd181530254
	I1014 08:47:17.511533   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:17.511657   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:17.511657   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:17.511657   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:17.511657   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:17 GMT
	I1014 08:47:17.511965   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I1014 08:47:17.512698   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:17.512825   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:17.512825   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:17.512825   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:17.515118   15224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 08:47:17.515118   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:17.515118   15224 round_trippers.go:580]     Audit-Id: 7cd020d0-aead-43af-8a37-482b65d69e01
	I1014 08:47:17.516068   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:17.516068   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:17.516068   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:17.516068   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:17.516068   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:17 GMT
	I1014 08:47:17.516560   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:17.517068   15224 pod_ready.go:103] pod "coredns-7c65d6cfc9-fs9ct" in "kube-system" namespace has status "Ready":"False"
	I1014 08:47:18.007225   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:18.007885   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:18.007885   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:18.007885   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:18.011998   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:47:18.011998   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:18.011998   15224 round_trippers.go:580]     Audit-Id: 9a8c58e9-7460-4b9e-9090-2e9a6e238080
	I1014 08:47:18.011998   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:18.012143   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:18.012143   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:18.012143   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:18.012143   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:18 GMT
	I1014 08:47:18.012383   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I1014 08:47:18.013224   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:18.013224   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:18.013305   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:18.013305   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:18.019345   15224 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1014 08:47:18.019345   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:18.019345   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:18.019345   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:18.019345   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:18.019345   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:18.019345   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:18 GMT
	I1014 08:47:18.019345   15224 round_trippers.go:580]     Audit-Id: 7c2d2856-b25d-4da1-91f1-53542397cdf3
	I1014 08:47:18.019345   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:18.507458   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:18.507554   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:18.507554   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:18.507554   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:18.514851   15224 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1014 08:47:18.514851   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:18.514851   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:18.514851   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:18.514851   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:18.514851   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:18.514851   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:18 GMT
	I1014 08:47:18.514851   15224 round_trippers.go:580]     Audit-Id: cdaf01ce-867d-4c99-bc82-f89879c71827
	I1014 08:47:18.514851   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I1014 08:47:18.514851   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:18.514851   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:18.514851   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:18.514851   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:18.518408   15224 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:47:18.518408   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:18.518408   15224 round_trippers.go:580]     Audit-Id: d74c1796-3bbd-4b83-974c-c1a06e450acf
	I1014 08:47:18.518408   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:18.518408   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:18.518408   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:18.518408   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:18.518408   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:18 GMT
	I1014 08:47:18.518408   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:19.007253   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:19.007253   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:19.007253   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:19.007253   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:19.011995   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:47:19.012103   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:19.012182   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:19.012182   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:19.012182   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:19.012182   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:19 GMT
	I1014 08:47:19.012182   15224 round_trippers.go:580]     Audit-Id: 6255e6ac-a168-4219-a867-de075f16566a
	I1014 08:47:19.012182   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:19.012417   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I1014 08:47:19.013173   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:19.013173   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:19.013173   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:19.013253   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:19.020207   15224 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1014 08:47:19.020207   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:19.020207   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:19 GMT
	I1014 08:47:19.020207   15224 round_trippers.go:580]     Audit-Id: 91df407e-0025-4337-a011-0fa51e27fecd
	I1014 08:47:19.020303   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:19.020303   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:19.020303   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:19.020303   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:19.020625   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:19.507671   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:19.507755   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:19.507755   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:19.507755   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:19.512040   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:47:19.512137   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:19.512137   15224 round_trippers.go:580]     Audit-Id: fb270a46-2a7d-4c0d-94be-b49625fa56f8
	I1014 08:47:19.512137   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:19.512137   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:19.512137   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:19.512304   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:19.512304   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:19 GMT
	I1014 08:47:19.512447   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I1014 08:47:19.513603   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:19.513920   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:19.514041   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:19.514041   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:19.518839   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:47:19.518886   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:19.518886   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:19.518886   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:19.518886   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:19.518886   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:19.518886   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:19 GMT
	I1014 08:47:19.518886   15224 round_trippers.go:580]     Audit-Id: 5b4168cc-f28b-4b94-93da-8a92474c4810
	I1014 08:47:19.518886   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:19.519590   15224 pod_ready.go:103] pod "coredns-7c65d6cfc9-fs9ct" in "kube-system" namespace has status "Ready":"False"
	I1014 08:47:20.006817   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:20.006817   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:20.006817   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:20.006817   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:20.011443   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:47:20.011543   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:20.011543   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:20.011543   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:20.011543   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:20.011635   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:20.011635   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:20 GMT
	I1014 08:47:20.011635   15224 round_trippers.go:580]     Audit-Id: 44719273-604e-405a-b11e-01c0640de86a
	I1014 08:47:20.011635   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I1014 08:47:20.012486   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:20.012486   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:20.012574   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:20.012574   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:20.015859   15224 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:47:20.016039   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:20.016039   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:20 GMT
	I1014 08:47:20.016039   15224 round_trippers.go:580]     Audit-Id: 4d6b5e39-ffe1-446f-9477-0545b059dffb
	I1014 08:47:20.016039   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:20.016039   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:20.016103   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:20.016103   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:20.016497   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:20.506678   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:20.506678   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:20.506678   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:20.506678   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:20.510833   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:47:20.511227   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:20.511304   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:20.511304   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:20.511304   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:20 GMT
	I1014 08:47:20.511336   15224 round_trippers.go:580]     Audit-Id: 68f65e29-6ed5-4c5b-8c44-0f46bd80731c
	I1014 08:47:20.511336   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:20.511336   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:20.511336   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1828","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I1014 08:47:20.512129   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:20.512129   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:20.512129   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:20.512129   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:20.515890   15224 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:47:20.515890   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:20.515890   15224 round_trippers.go:580]     Audit-Id: e481aaa4-6960-411e-89e9-1e9183156bb6
	I1014 08:47:20.515890   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:20.515890   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:20.515890   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:20.515890   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:20.515890   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:20 GMT
	I1014 08:47:20.516719   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:21.007016   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:21.007016   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:21.007016   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:21.007016   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:21.010237   15224 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:47:21.010237   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:21.011027   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:21.011027   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:21 GMT
	I1014 08:47:21.011027   15224 round_trippers.go:580]     Audit-Id: 47b87774-7090-483b-9926-39ccad3716e5
	I1014 08:47:21.011027   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:21.011027   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:21.011027   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:21.011186   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1996","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7275 chars]
	I1014 08:47:21.012374   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:21.012422   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:21.012458   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:21.012458   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:21.032758   15224 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I1014 08:47:21.032758   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:21.032758   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:21.032758   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:21.032758   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:21.032758   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:21.032758   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:21 GMT
	I1014 08:47:21.032758   15224 round_trippers.go:580]     Audit-Id: 1571e1d2-fe3c-4d0f-9e64-214a36f36698
	I1014 08:47:21.032758   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:21.507257   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:21.507257   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:21.507257   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:21.507257   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:21.511829   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:47:21.511904   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:21.511904   15224 round_trippers.go:580]     Audit-Id: 907c98fb-199a-4b1f-befc-1db794bb880e
	I1014 08:47:21.511904   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:21.511904   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:21.511904   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:21.511904   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:21.511904   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:21 GMT
	I1014 08:47:21.512515   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1996","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7275 chars]
	I1014 08:47:21.513885   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:21.513885   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:21.513885   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:21.513885   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:21.517247   15224 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:47:21.517299   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:21.517299   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:21 GMT
	I1014 08:47:21.517299   15224 round_trippers.go:580]     Audit-Id: ef88c98f-83a7-4328-a949-cc9eb37c3d0e
	I1014 08:47:21.517299   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:21.517299   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:21.517299   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:21.517388   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:21.517692   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:22.007412   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:22.007412   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:22.007412   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:22.007412   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:22.012763   15224 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:47:22.012900   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:22.012900   15224 round_trippers.go:580]     Audit-Id: 45455627-5f1f-4676-8d3e-5703470425b1
	I1014 08:47:22.012900   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:22.012900   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:22.012900   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:22.012900   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:22.012900   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:22 GMT
	I1014 08:47:22.013364   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"1996","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7275 chars]
	I1014 08:47:22.014066   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:22.014066   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:22.014066   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:22.014066   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:22.017535   15224 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:47:22.017535   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:22.017535   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:22.017649   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:22.017649   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:22.017672   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:22 GMT
	I1014 08:47:22.017672   15224 round_trippers.go:580]     Audit-Id: 48e1e5e9-abf3-497d-a3da-5e0cec144c2c
	I1014 08:47:22.017672   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:22.017807   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:22.018437   15224 pod_ready.go:103] pod "coredns-7c65d6cfc9-fs9ct" in "kube-system" namespace has status "Ready":"False"
	I1014 08:47:22.507392   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-fs9ct
	I1014 08:47:22.507529   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:22.507529   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:22.507529   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:22.516406   15224 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1014 08:47:22.516406   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:22.516406   15224 round_trippers.go:580]     Audit-Id: a7b5ccb8-bb0d-4772-8475-adc01a709731
	I1014 08:47:22.516406   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:22.516406   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:22.516406   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:22.516406   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:22.516406   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:22 GMT
	I1014 08:47:22.516406   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"2001","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7046 chars]
	I1014 08:47:22.517152   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:22.517152   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:22.517152   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:22.517152   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:22.521970   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:47:22.521970   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:22.521970   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:22.521970   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:22.521970   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:22.521970   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:22 GMT
	I1014 08:47:22.522111   15224 round_trippers.go:580]     Audit-Id: 5ecc5f15-8d19-4bc2-9de5-40d471223401
	I1014 08:47:22.522111   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:22.522336   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:22.522494   15224 pod_ready.go:93] pod "coredns-7c65d6cfc9-fs9ct" in "kube-system" namespace has status "Ready":"True"
	I1014 08:47:22.522494   15224 pod_ready.go:82] duration metric: took 22.5161315s for pod "coredns-7c65d6cfc9-fs9ct" in "kube-system" namespace to be "Ready" ...
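
Note: the 22.5s figure is the sum of the poll loop visible above; minikube re-fetches the pod roughly every 500ms until its Ready condition flips to True, with a 6m ceiling ("waiting up to 6m0s"). A sketch of an equivalent wait using client-go; the kubeconfig path, namespace, and pod name below are taken from this run and would differ elsewhere.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podIsReady reports whether the pod's Ready condition is True.
    func podIsReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // Poll every 500ms for up to 6 minutes, mirroring the cadence
        // and timeout visible in the log above.
        err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-7c65d6cfc9-fs9ct", metav1.GetOptions{})
                if err != nil {
                    return false, err
                }
                return podIsReady(pod), nil
            })
        fmt.Println("wait result:", err)
    }
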
	I1014 08:47:22.522494   15224 pod_ready.go:79] waiting up to 6m0s for pod "etcd-multinode-671000" in "kube-system" namespace to be "Ready" ...
	I1014 08:47:22.522494   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-671000
	I1014 08:47:22.522494   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:22.522494   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:22.522494   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:22.526482   15224 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:47:22.526482   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:22.526482   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:22.526482   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:22 GMT
	I1014 08:47:22.526482   15224 round_trippers.go:580]     Audit-Id: 6d042254-f6fa-4858-8857-13aec94cb0f3
	I1014 08:47:22.526482   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:22.526482   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:22.526482   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:22.526482   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-671000","namespace":"kube-system","uid":"098aece2-cb2c-470a-878a-872417e4387f","resourceVersion":"1933","creationTimestamp":"2024-10-14T15:46:17Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.20.106.123:2379","kubernetes.io/config.hash":"679486a8f27de5805bc2e87fb1920dce","kubernetes.io/config.mirror":"679486a8f27de5805bc2e87fb1920dce","kubernetes.io/config.seen":"2024-10-14T15:46:09.843414705Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:46:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6617 chars]
	I1014 08:47:22.527519   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:22.527695   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:22.527695   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:22.527767   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:22.529858   15224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 08:47:22.530635   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:22.530635   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:22.530635   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:22 GMT
	I1014 08:47:22.530635   15224 round_trippers.go:580]     Audit-Id: fc7b79b1-3b10-4b82-ab77-14c76b0685e4
	I1014 08:47:22.530635   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:22.530635   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:22.530635   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:22.530952   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:22.531075   15224 pod_ready.go:93] pod "etcd-multinode-671000" in "kube-system" namespace has status "Ready":"True"
	I1014 08:47:22.531075   15224 pod_ready.go:82] duration metric: took 8.5807ms for pod "etcd-multinode-671000" in "kube-system" namespace to be "Ready" ...
	I1014 08:47:22.531075   15224 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-multinode-671000" in "kube-system" namespace to be "Ready" ...
	I1014 08:47:22.531075   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-671000
	I1014 08:47:22.531075   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:22.531075   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:22.531075   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:22.537967   15224 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1014 08:47:22.537967   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:22.537967   15224 round_trippers.go:580]     Audit-Id: 077fd83a-3418-4449-8a48-818e72fe3586
	I1014 08:47:22.537967   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:22.537967   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:22.537967   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:22.537967   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:22.537967   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:22 GMT
	I1014 08:47:22.538728   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-671000","namespace":"kube-system","uid":"64595feb-e6e8-4e69-a4b7-6459d15e3beb","resourceVersion":"1925","creationTimestamp":"2024-10-14T15:46:15Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.20.106.123:8443","kubernetes.io/config.hash":"864b69f35cb25e9dd5d87a753a055a10","kubernetes.io/config.mirror":"864b69f35cb25e9dd5d87a753a055a10","kubernetes.io/config.seen":"2024-10-14T15:46:09.765946769Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:46:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 8049 chars]
	I1014 08:47:22.539331   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:22.539331   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:22.539331   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:22.539448   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:22.542179   15224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 08:47:22.542179   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:22.542179   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:22.542179   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:22 GMT
	I1014 08:47:22.542179   15224 round_trippers.go:580]     Audit-Id: c31189b1-3c1e-414b-9f82-d770e359bde5
	I1014 08:47:22.542179   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:22.542179   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:22.542179   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:22.542179   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:22.543224   15224 pod_ready.go:93] pod "kube-apiserver-multinode-671000" in "kube-system" namespace has status "Ready":"True"
	I1014 08:47:22.543349   15224 pod_ready.go:82] duration metric: took 12.236ms for pod "kube-apiserver-multinode-671000" in "kube-system" namespace to be "Ready" ...
	I1014 08:47:22.543349   15224 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-multinode-671000" in "kube-system" namespace to be "Ready" ...
	I1014 08:47:22.543439   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-671000
	I1014 08:47:22.543493   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:22.543529   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:22.543529   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:22.545619   15224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 08:47:22.545619   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:22.545619   15224 round_trippers.go:580]     Audit-Id: 4cfbb807-6017-4c01-87de-fdc47bd6c8d1
	I1014 08:47:22.545619   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:22.545619   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:22.545619   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:22.545619   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:22.545619   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:22 GMT
	I1014 08:47:22.545619   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-671000","namespace":"kube-system","uid":"a5c7bb80-c844-476f-ba47-1cd4e599b92d","resourceVersion":"1940","creationTimestamp":"2024-10-14T15:22:39Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b83e31864e0ae98d29d960866012ecb0","kubernetes.io/config.mirror":"b83e31864e0ae98d29d960866012ecb0","kubernetes.io/config.seen":"2024-10-14T15:22:39.775213119Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7475 chars]
	I1014 08:47:22.546767   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:22.546767   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:22.546767   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:22.546767   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:22.549114   15224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 08:47:22.549114   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:22.549114   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:22 GMT
	I1014 08:47:22.549114   15224 round_trippers.go:580]     Audit-Id: 8bfc765a-f250-4c33-9183-130700d1b585
	I1014 08:47:22.549114   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:22.549114   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:22.549114   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:22.549114   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:22.549114   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:22.549905   15224 pod_ready.go:93] pod "kube-controller-manager-multinode-671000" in "kube-system" namespace has status "Ready":"True"
	I1014 08:47:22.549941   15224 pod_ready.go:82] duration metric: took 6.5917ms for pod "kube-controller-manager-multinode-671000" in "kube-system" namespace to be "Ready" ...
	I1014 08:47:22.549941   15224 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-kbpjf" in "kube-system" namespace to be "Ready" ...
	I1014 08:47:22.550056   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kbpjf
	I1014 08:47:22.550124   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:22.550124   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:22.550214   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:22.553070   15224 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1014 08:47:22.553070   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:22.553070   15224 round_trippers.go:580]     Audit-Id: c22a969d-5aec-4108-8f1c-d075493f0a49
	I1014 08:47:22.553070   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:22.553070   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:22.553070   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:22.553070   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:22.553070   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:22 GMT
	I1014 08:47:22.553070   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-kbpjf","generateName":"kube-proxy-","namespace":"kube-system","uid":"004b7f38-fa3b-4c2c-9524-8d5b1ba514e9","resourceVersion":"1803","creationTimestamp":"2024-10-14T15:25:49Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e5217416-1ed2-4ea5-ae9b-ee7bcdba0d2a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:25:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e5217416-1ed2-4ea5-ae9b-ee7bcdba0d2a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6433 chars]
	I1014 08:47:22.554039   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000-m02
	I1014 08:47:22.554188   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:22.554188   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:22.554188   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:22.556365   15224 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 08:47:22.556884   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:22.556884   15224 round_trippers.go:580]     Audit-Id: e8490da7-4e4d-46a3-9830-9c188b304e0b
	I1014 08:47:22.556884   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:22.556884   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:22.556884   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:22.556884   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:22.556884   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:22 GMT
	I1014 08:47:22.557229   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000-m02","uid":"6883c70a-cd16-4441-905e-912bed6d2b2c","resourceVersion":"1990","creationTimestamp":"2024-10-14T15:25:49Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_10_14T08_25_50_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:25:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4584 chars]
	I1014 08:47:22.557495   15224 pod_ready.go:98] node "multinode-671000-m02" hosting pod "kube-proxy-kbpjf" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-671000-m02" has status "Ready":"Unknown"
	I1014 08:47:22.557495   15224 pod_ready.go:82] duration metric: took 7.5537ms for pod "kube-proxy-kbpjf" in "kube-system" namespace to be "Ready" ...
	E1014 08:47:22.557495   15224 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-671000-m02" hosting pod "kube-proxy-kbpjf" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-671000-m02" has status "Ready":"Unknown"
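
Note: the readiness wait treats a pod as unverifiable when its host node is not Ready, which is why kube-proxy-kbpjf is skipped while multinode-671000-m02 reports Ready "Unknown" (the node stopped heartbeating). A sketch of reading that node condition with client-go; the node name is copied from this run.

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        node, err := cs.CoreV1().Nodes().Get(context.Background(), "multinode-671000-m02", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                // Status can be True, False, or Unknown; "Unknown" is what
                // the log reports for m02 once its kubelet stops updating.
                fmt.Printf("node %s Ready=%s\n", node.Name, c.Status)
            }
        }
    }
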
	I1014 08:47:22.557495   15224 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-n6txs" in "kube-system" namespace to be "Ready" ...
	I1014 08:47:22.707761   15224 request.go:632] Waited for 150.266ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/kube-proxy-n6txs
	I1014 08:47:22.707761   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/kube-proxy-n6txs
	I1014 08:47:22.707761   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:22.708100   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:22.708100   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:22.712025   15224 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:47:22.712025   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:22.712142   15224 round_trippers.go:580]     Audit-Id: e0a449eb-bf2a-481c-8ccf-efed27df1b24
	I1014 08:47:22.712142   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:22.712142   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:22.712142   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:22.712142   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:22.712142   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:22 GMT
	I1014 08:47:22.712487   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-n6txs","generateName":"kube-proxy-","namespace":"kube-system","uid":"796a44f9-2067-438d-9359-34d5f968c861","resourceVersion":"1784","creationTimestamp":"2024-10-14T15:30:35Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e5217416-1ed2-4ea5-ae9b-ee7bcdba0d2a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:30:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e5217416-1ed2-4ea5-ae9b-ee7bcdba0d2a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6428 chars]
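
Note: the "Waited for ... due to client-side throttling, not priority and fairness" messages are printed by client-go (request.go) when its client-side token-bucket rate limiter delays a request noticeably; as the message itself says, this is not server-side API Priority and Fairness. The limiter is configured on the rest.Config; a sketch follows, with QPS/Burst values that are arbitrary illustrations, not minikube's settings.

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        // client-go applies a client-side token-bucket limiter; once the
        // burst is spent, requests queue and client-go logs the
        // "Waited for ... due to client-side throttling" message seen
        // above. Raising QPS/Burst (illustrative values) reduces queueing.
        cfg.QPS = 50
        cfg.Burst = 100

        cs := kubernetes.NewForConfigOrDie(cfg)
        pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("pods:", len(pods.Items))
    }
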
	I1014 08:47:22.907393   15224 request.go:632] Waited for 194.0438ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.106.123:8443/api/v1/nodes/multinode-671000-m03
	I1014 08:47:22.907393   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000-m03
	I1014 08:47:22.907926   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:22.907926   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:22.907926   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:22.911879   15224 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:47:22.911879   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:22.911879   15224 round_trippers.go:580]     Audit-Id: 440f6872-d332-4c7f-a3b4-eed3ef19f870
	I1014 08:47:22.911879   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:22.911879   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:22.911879   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:22.911879   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:22.911879   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:22 GMT
	I1014 08:47:22.912251   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000-m03","uid":"a7ea02fb-ac24-4430-adbc-9815c644cfa0","resourceVersion":"1897","creationTimestamp":"2024-10-14T15:41:34Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_10_14T08_41_35_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:41:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4303 chars]
	I1014 08:47:22.912794   15224 pod_ready.go:98] node "multinode-671000-m03" hosting pod "kube-proxy-n6txs" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-671000-m03" has status "Ready":"Unknown"
	I1014 08:47:22.912818   15224 pod_ready.go:82] duration metric: took 355.3229ms for pod "kube-proxy-n6txs" in "kube-system" namespace to be "Ready" ...
	E1014 08:47:22.912818   15224 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-671000-m03" hosting pod "kube-proxy-n6txs" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-671000-m03" has status "Ready":"Unknown"
	I1014 08:47:22.912934   15224 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-r74dx" in "kube-system" namespace to be "Ready" ...
	I1014 08:47:23.107157   15224 request.go:632] Waited for 194.1465ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r74dx
	I1014 08:47:23.107157   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r74dx
	I1014 08:47:23.107157   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:23.107157   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:23.107157   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:23.112683   15224 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:47:23.112683   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:23.112775   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:23.112775   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:23.112775   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:23 GMT
	I1014 08:47:23.112775   15224 round_trippers.go:580]     Audit-Id: a1b45df5-6598-4da5-9b3a-6a888f71aa39
	I1014 08:47:23.112775   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:23.112775   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:23.113207   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-r74dx","generateName":"kube-proxy-","namespace":"kube-system","uid":"f8d14473-8859-4015-84e9-d00656cc00c9","resourceVersion":"1856","creationTimestamp":"2024-10-14T15:22:44Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"e5217416-1ed2-4ea5-ae9b-ee7bcdba0d2a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e5217416-1ed2-4ea5-ae9b-ee7bcdba0d2a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6405 chars]
	I1014 08:47:23.307122   15224 request.go:632] Waited for 193.0613ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:23.307122   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:23.307122   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:23.307122   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:23.307122   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:23.311228   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:47:23.312017   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:23.312017   15224 round_trippers.go:580]     Audit-Id: 48494c0d-e599-4956-8dd2-f606bb5be182
	I1014 08:47:23.312119   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:23.312200   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:23.312200   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:23.312230   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:23.312230   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:23 GMT
	I1014 08:47:23.312503   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:23.313162   15224 pod_ready.go:93] pod "kube-proxy-r74dx" in "kube-system" namespace has status "Ready":"True"
	I1014 08:47:23.313162   15224 pod_ready.go:82] duration metric: took 400.2274ms for pod "kube-proxy-r74dx" in "kube-system" namespace to be "Ready" ...
	I1014 08:47:23.313282   15224 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-multinode-671000" in "kube-system" namespace to be "Ready" ...
	I1014 08:47:23.507648   15224 request.go:632] Waited for 194.2842ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-671000
	I1014 08:47:23.507648   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-671000
	I1014 08:47:23.507648   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:23.507648   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:23.508208   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:23.512073   15224 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:47:23.512138   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:23.512138   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:23 GMT
	I1014 08:47:23.512138   15224 round_trippers.go:580]     Audit-Id: 27b859b4-8ea0-4405-86d2-b7f06931ee6d
	I1014 08:47:23.512138   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:23.512138   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:23.512138   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:23.512138   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:23.512545   15224 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-671000","namespace":"kube-system","uid":"97febcab-f54d-4338-ba7c-2dc5e69b77fc","resourceVersion":"1922","creationTimestamp":"2024-10-14T15:22:38Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"3e987cfaedc75c39145e8fc131c60c81","kubernetes.io/config.mirror":"3e987cfaedc75c39145e8fc131c60c81","kubernetes.io/config.seen":"2024-10-14T15:22:32.104995089Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5205 chars]
	I1014 08:47:23.707644   15224 request.go:632] Waited for 194.4339ms due to client-side throttling, not priority and fairness, request: GET:https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:23.708118   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes/multinode-671000
	I1014 08:47:23.708118   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:23.708118   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:23.708118   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:23.712120   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:47:23.712120   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:23.712231   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:23.712231   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:23.712231   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:23.712231   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:23.712231   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:23 GMT
	I1014 08:47:23.712231   15224 round_trippers.go:580]     Audit-Id: a0546db7-53b1-42b6-82b2-1ddac5257dfc
	I1014 08:47:23.712467   15224 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-10-14T15:22:36Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I1014 08:47:23.713024   15224 pod_ready.go:93] pod "kube-scheduler-multinode-671000" in "kube-system" namespace has status "Ready":"True"
	I1014 08:47:23.713119   15224 pod_ready.go:82] duration metric: took 399.7411ms for pod "kube-scheduler-multinode-671000" in "kube-system" namespace to be "Ready" ...
	I1014 08:47:23.713119   15224 pod_ready.go:39] duration metric: took 23.7174524s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 08:47:23.713185   15224 api_server.go:52] waiting for apiserver process to appear ...
	I1014 08:47:23.722066   15224 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 08:47:23.749304   15224 command_runner.go:130] > a834664fc8b8
	I1014 08:47:23.749420   15224 logs.go:282] 1 containers: [a834664fc8b8]
	I1014 08:47:23.759882   15224 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 08:47:23.786736   15224 command_runner.go:130] > 48c8492e231e
	I1014 08:47:23.786909   15224 logs.go:282] 1 containers: [48c8492e231e]
	I1014 08:47:23.796103   15224 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 08:47:23.820348   15224 command_runner.go:130] > 5d223e2e64fc
	I1014 08:47:23.820902   15224 command_runner.go:130] > d9831e9f8ce8
	I1014 08:47:23.822183   15224 logs.go:282] 2 containers: [5d223e2e64fc d9831e9f8ce8]
	I1014 08:47:23.830412   15224 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 08:47:23.853420   15224 command_runner.go:130] > d428685276e1
	I1014 08:47:23.854031   15224 command_runner.go:130] > 661e75bbf6b4
	I1014 08:47:23.854031   15224 logs.go:282] 2 containers: [d428685276e1 661e75bbf6b4]
	I1014 08:47:23.864582   15224 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 08:47:23.897352   15224 command_runner.go:130] > e83db276dec3
	I1014 08:47:23.897949   15224 command_runner.go:130] > ea19428d7036
	I1014 08:47:23.897949   15224 logs.go:282] 2 containers: [e83db276dec3 ea19428d7036]
	I1014 08:47:23.907963   15224 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 08:47:23.938100   15224 command_runner.go:130] > 8af48c446f7e
	I1014 08:47:23.938100   15224 command_runner.go:130] > 712aad669c9f
	I1014 08:47:23.938229   15224 logs.go:282] 2 containers: [8af48c446f7e 712aad669c9f]
	I1014 08:47:23.951769   15224 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 08:47:23.978763   15224 command_runner.go:130] > bba035362eb9
	I1014 08:47:23.979792   15224 command_runner.go:130] > fcdf89a3ac8c
	I1014 08:47:23.979867   15224 logs.go:282] 2 containers: [bba035362eb9 fcdf89a3ac8c]
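
Note: log gathering starts by enumerating container IDs, one "docker ps -a" per control-plane component, filtered on the kubelet's k8s_<component> container-name prefix and formatted to print only IDs. A local Go sketch of that enumeration; minikube actually runs these commands inside the VM over SSH (ssh_runner), which this sketch omits.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainerIDs mimics the step above: docker ps -a filtered by
    // the k8s_<component> name prefix, printing only container IDs.
    func listContainerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet"} {
            ids, err := listContainerIDs(c)
            if err != nil {
                fmt.Println(c, "error:", err)
                continue
            }
            fmt.Printf("%d containers for %s: %v\n", len(ids), c, ids)
        }
    }
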
	I1014 08:47:23.979990   15224 logs.go:123] Gathering logs for kube-proxy [ea19428d7036] ...
	I1014 08:47:23.980053   15224 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea19428d7036"
	I1014 08:47:24.011154   15224 command_runner.go:130] ! I1014 15:22:47.466748       1 server_linux.go:66] "Using iptables proxy"
	I1014 08:47:24.011220   15224 command_runner.go:130] ! E1014 15:22:47.511018       1 proxier.go:734] "Error cleaning up nftables rules" err=<
	I1014 08:47:24.011220   15224 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-24: Error: Could not process rule: Operation not supported
	I1014 08:47:24.011220   15224 command_runner.go:130] ! 	add table ip kube-proxy
	I1014 08:47:24.011220   15224 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^
	I1014 08:47:24.011220   15224 command_runner.go:130] !  >
	I1014 08:47:24.011220   15224 command_runner.go:130] ! E1014 15:22:47.546236       1 proxier.go:734] "Error cleaning up nftables rules" err=<
	I1014 08:47:24.011220   15224 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
	I1014 08:47:24.011220   15224 command_runner.go:130] ! 	add table ip6 kube-proxy
	I1014 08:47:24.011220   15224 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^^
	I1014 08:47:24.011220   15224 command_runner.go:130] !  >
	I1014 08:47:24.011220   15224 command_runner.go:130] ! I1014 15:22:47.606437       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["172.20.100.167"]
	I1014 08:47:24.011220   15224 command_runner.go:130] ! E1014 15:22:47.606505       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1014 08:47:24.011220   15224 command_runner.go:130] ! I1014 15:22:47.788755       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1014 08:47:24.011220   15224 command_runner.go:130] ! I1014 15:22:47.788820       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1014 08:47:24.011220   15224 command_runner.go:130] ! I1014 15:22:47.788852       1 server_linux.go:169] "Using iptables Proxier"
	I1014 08:47:24.011220   15224 command_runner.go:130] ! I1014 15:22:47.807050       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1014 08:47:24.011220   15224 command_runner.go:130] ! I1014 15:22:47.807424       1 server.go:483] "Version info" version="v1.31.1"
	I1014 08:47:24.011220   15224 command_runner.go:130] ! I1014 15:22:47.807440       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 08:47:24.011220   15224 command_runner.go:130] ! I1014 15:22:47.810361       1 config.go:199] "Starting service config controller"
	I1014 08:47:24.011220   15224 command_runner.go:130] ! I1014 15:22:47.810416       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1014 08:47:24.011220   15224 command_runner.go:130] ! I1014 15:22:47.810600       1 config.go:105] "Starting endpoint slice config controller"
	I1014 08:47:24.011220   15224 command_runner.go:130] ! I1014 15:22:47.810629       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1014 08:47:24.011220   15224 command_runner.go:130] ! I1014 15:22:47.811297       1 config.go:328] "Starting node config controller"
	I1014 08:47:24.011820   15224 command_runner.go:130] ! I1014 15:22:47.811309       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1014 08:47:24.011820   15224 command_runner.go:130] ! I1014 15:22:47.911443       1 shared_informer.go:320] Caches are synced for node config
	I1014 08:47:24.011820   15224 command_runner.go:130] ! I1014 15:22:47.911791       1 shared_informer.go:320] Caches are synced for service config
	I1014 08:47:24.011820   15224 command_runner.go:130] ! I1014 15:22:47.911866       1 shared_informer.go:320] Caches are synced for endpoint slice config
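
Note: each gathered container is then tailed with "docker logs --tail 400 <id>", wrapped in "/bin/bash -c" over SSH; the "!" prefix on the command_runner lines above appears to mark output captured from stderr (where kube-proxy writes its logs), while ">" marks stdout. A local sketch, with the container ID copied from this run and the SSH and bash wrapping omitted.

    package main

    import (
        "fmt"
        "os/exec"
    )

    // tailContainerLogs mirrors the docker logs --tail step above.
    // CombinedOutput captures both streams, since docker logs replays
    // the container's stderr (where kube-proxy logs) to stderr.
    func tailContainerLogs(id string, lines int) (string, error) {
        out, err := exec.Command("docker", "logs",
            "--tail", fmt.Sprint(lines), id).CombinedOutput()
        return string(out), err
    }

    func main() {
        logs, err := tailContainerLogs("ea19428d7036", 400) // ID from the log above
        if err != nil {
            fmt.Println("error:", err)
        }
        fmt.Print(logs)
    }
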
	I1014 08:47:24.016336   15224 logs.go:123] Gathering logs for kube-controller-manager [8af48c446f7e] ...
	I1014 08:47:24.016336   15224 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af48c446f7e"
	I1014 08:47:24.047920   15224 command_runner.go:130] ! I1014 15:46:12.989235       1 serving.go:386] Generated self-signed cert in-memory
	I1014 08:47:24.048096   15224 command_runner.go:130] ! I1014 15:46:13.820617       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I1014 08:47:24.048096   15224 command_runner.go:130] ! I1014 15:46:13.820897       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 08:47:24.048273   15224 command_runner.go:130] ! I1014 15:46:13.823101       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1014 08:47:24.048344   15224 command_runner.go:130] ! I1014 15:46:13.823494       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1014 08:47:24.048344   15224 command_runner.go:130] ! I1014 15:46:13.824132       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I1014 08:47:24.048914   15224 command_runner.go:130] ! I1014 15:46:13.824214       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1014 08:47:24.048914   15224 command_runner.go:130] ! I1014 15:46:17.208145       1 controllermanager.go:797] "Started controller" controller="serviceaccount-token-controller"
	I1014 08:47:24.049324   15224 command_runner.go:130] ! I1014 15:46:17.211496       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I1014 08:47:24.049324   15224 command_runner.go:130] ! I1014 15:46:17.268813       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I1014 08:47:24.049917   15224 command_runner.go:130] ! I1014 15:46:17.269727       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I1014 08:47:24.050720   15224 command_runner.go:130] ! I1014 15:46:17.270864       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I1014 08:47:24.050720   15224 command_runner.go:130] ! I1014 15:46:17.271094       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I1014 08:47:24.050786   15224 command_runner.go:130] ! I1014 15:46:17.271857       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I1014 08:47:24.050786   15224 command_runner.go:130] ! I1014 15:46:17.271962       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I1014 08:47:24.050786   15224 command_runner.go:130] ! I1014 15:46:17.272049       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I1014 08:47:24.050786   15224 command_runner.go:130] ! I1014 15:46:17.272075       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I1014 08:47:24.050786   15224 command_runner.go:130] ! I1014 15:46:17.273540       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I1014 08:47:24.051348   15224 command_runner.go:130] ! I1014 15:46:17.274245       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I1014 08:47:24.051401   15224 command_runner.go:130] ! I1014 15:46:17.274579       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I1014 08:47:24.051401   15224 command_runner.go:130] ! I1014 15:46:17.274747       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I1014 08:47:24.051486   15224 command_runner.go:130] ! I1014 15:46:17.274772       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I1014 08:47:24.051486   15224 command_runner.go:130] ! I1014 15:46:17.275348       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I1014 08:47:24.051486   15224 command_runner.go:130] ! I1014 15:46:17.275380       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I1014 08:47:24.051486   15224 command_runner.go:130] ! I1014 15:46:17.275397       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I1014 08:47:24.051486   15224 command_runner.go:130] ! I1014 15:46:17.275571       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I1014 08:47:24.051486   15224 command_runner.go:130] ! I1014 15:46:17.275603       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I1014 08:47:24.051486   15224 command_runner.go:130] ! W1014 15:46:17.275618       1 shared_informer.go:597] resyncPeriod 13h32m18.096579392s is smaller than resyncCheckPeriod 20h55m54.648340273s and the informer has already started. Changing it to 20h55m54.648340273s
	I1014 08:47:24.051486   15224 command_runner.go:130] ! I1014 15:46:17.276096       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I1014 08:47:24.051486   15224 command_runner.go:130] ! I1014 15:46:17.276150       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I1014 08:47:24.051486   15224 command_runner.go:130] ! I1014 15:46:17.276197       1 controllermanager.go:797] "Started controller" controller="resourcequota-controller"
	I1014 08:47:24.051486   15224 command_runner.go:130] ! I1014 15:46:17.276213       1 resource_quota_controller.go:300] "Starting resource quota controller" logger="resourcequota-controller"
	I1014 08:47:24.051486   15224 command_runner.go:130] ! I1014 15:46:17.276260       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I1014 08:47:24.051486   15224 command_runner.go:130] ! I1014 15:46:17.276359       1 resource_quota_monitor.go:308] "QuotaMonitor running" logger="resourcequota-controller"
	I1014 08:47:24.051486   15224 command_runner.go:130] ! I1014 15:46:17.283642       1 controllermanager.go:797] "Started controller" controller="replicaset-controller"
	I1014 08:47:24.051486   15224 command_runner.go:130] ! I1014 15:46:17.284697       1 replica_set.go:217] "Starting controller" logger="replicaset-controller" name="replicaset"
	I1014 08:47:24.051486   15224 command_runner.go:130] ! I1014 15:46:17.284913       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I1014 08:47:24.051486   15224 command_runner.go:130] ! I1014 15:46:17.288417       1 controllermanager.go:797] "Started controller" controller="cronjob-controller"
	I1014 08:47:24.051486   15224 command_runner.go:130] ! I1014 15:46:17.289073       1 cronjob_controllerv2.go:145] "Starting cronjob controller v2" logger="cronjob-controller"
	I1014 08:47:24.051486   15224 command_runner.go:130] ! I1014 15:46:17.289091       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I1014 08:47:24.051486   15224 command_runner.go:130] ! I1014 15:46:17.292212       1 controllermanager.go:797] "Started controller" controller="certificatesigningrequest-approving-controller"
	I1014 08:47:24.051486   15224 command_runner.go:130] ! I1014 15:46:17.292573       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I1014 08:47:24.051486   15224 command_runner.go:130] ! I1014 15:46:17.292591       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I1014 08:47:24.051486   15224 command_runner.go:130] ! I1014 15:46:17.295276       1 controllermanager.go:797] "Started controller" controller="clusterrole-aggregation-controller"
	I1014 08:47:24.051486   15224 command_runner.go:130] ! I1014 15:46:17.295785       1 controllermanager.go:749] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I1014 08:47:24.051486   15224 command_runner.go:130] ! I1014 15:46:17.298756       1 clusterroleaggregation_controller.go:194] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I1014 08:47:24.051486   15224 command_runner.go:130] ! I1014 15:46:17.299107       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I1014 08:47:24.052023   15224 command_runner.go:130] ! I1014 15:46:17.299997       1 controllermanager.go:797] "Started controller" controller="endpointslice-mirroring-controller"
	I1014 08:47:24.052023   15224 command_runner.go:130] ! I1014 15:46:17.302040       1 endpointslicemirroring_controller.go:227] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I1014 08:47:24.052090   15224 command_runner.go:130] ! I1014 15:46:17.302058       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I1014 08:47:24.052090   15224 command_runner.go:130] ! I1014 15:46:17.305668       1 controllermanager.go:797] "Started controller" controller="replicationcontroller-controller"
	I1014 08:47:24.052090   15224 command_runner.go:130] ! I1014 15:46:17.308801       1 replica_set.go:217] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I1014 08:47:24.052090   15224 command_runner.go:130] ! I1014 15:46:17.308819       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I1014 08:47:24.052090   15224 command_runner.go:130] ! I1014 15:46:17.318320       1 shared_informer.go:320] Caches are synced for tokens
	I1014 08:47:24.052090   15224 command_runner.go:130] ! I1014 15:46:17.329856       1 controllermanager.go:797] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I1014 08:47:24.052194   15224 command_runner.go:130] ! I1014 15:46:17.330990       1 horizontal.go:201] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I1014 08:47:24.052194   15224 command_runner.go:130] ! I1014 15:46:17.331395       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I1014 08:47:24.052194   15224 command_runner.go:130] ! I1014 15:46:17.345566       1 controllermanager.go:797] "Started controller" controller="disruption-controller"
	I1014 08:47:24.052268   15224 command_runner.go:130] ! I1014 15:46:17.345806       1 disruption.go:452] "Sending events to api server." logger="disruption-controller"
	I1014 08:47:24.052268   15224 command_runner.go:130] ! I1014 15:46:17.345841       1 disruption.go:463] "Starting disruption controller" logger="disruption-controller"
	I1014 08:47:24.052342   15224 command_runner.go:130] ! I1014 15:46:17.345937       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I1014 08:47:24.052342   15224 command_runner.go:130] ! E1014 15:46:17.350088       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I1014 08:47:24.052342   15224 command_runner.go:130] ! I1014 15:46:17.350237       1 controllermanager.go:775] "Warning: skipping controller" controller="service-lb-controller"
	I1014 08:47:24.052342   15224 command_runner.go:130] ! I1014 15:46:17.350277       1 controllermanager.go:775] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I1014 08:47:24.052342   15224 command_runner.go:130] ! I1014 15:46:17.359040       1 controllermanager.go:797] "Started controller" controller="endpoints-controller"
	I1014 08:47:24.052342   15224 command_runner.go:130] ! I1014 15:46:17.360243       1 endpoints_controller.go:182] "Starting endpoint controller" logger="endpoints-controller"
	I1014 08:47:24.052342   15224 command_runner.go:130] ! I1014 15:46:17.360265       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I1014 08:47:24.052342   15224 command_runner.go:130] ! I1014 15:46:17.362115       1 controllermanager.go:797] "Started controller" controller="pod-garbage-collector-controller"
	I1014 08:47:24.052466   15224 command_runner.go:130] ! I1014 15:46:17.362235       1 gc_controller.go:99] "Starting GC controller" logger="pod-garbage-collector-controller"
	I1014 08:47:24.052466   15224 command_runner.go:130] ! I1014 15:46:17.362245       1 shared_informer.go:313] Waiting for caches to sync for GC
	I1014 08:47:24.052466   15224 command_runner.go:130] ! I1014 15:46:17.364537       1 controllermanager.go:797] "Started controller" controller="serviceaccount-controller"
	I1014 08:47:24.052515   15224 command_runner.go:130] ! I1014 15:46:17.364725       1 serviceaccounts_controller.go:114] "Starting service account controller" logger="serviceaccount-controller"
	I1014 08:47:24.052515   15224 command_runner.go:130] ! I1014 15:46:17.364738       1 shared_informer.go:313] Waiting for caches to sync for service account
	I1014 08:47:24.052515   15224 command_runner.go:130] ! I1014 15:46:17.367152       1 controllermanager.go:797] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I1014 08:47:24.052515   15224 command_runner.go:130] ! I1014 15:46:17.367373       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I1014 08:47:24.052585   15224 command_runner.go:130] ! I1014 15:46:17.369619       1 controllermanager.go:797] "Started controller" controller="bootstrap-signer-controller"
	I1014 08:47:24.052613   15224 command_runner.go:130] ! I1014 15:46:17.370097       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I1014 08:47:24.052613   15224 command_runner.go:130] ! I1014 15:46:17.373109       1 controllermanager.go:797] "Started controller" controller="token-cleaner-controller"
	I1014 08:47:24.052647   15224 command_runner.go:130] ! I1014 15:46:17.373475       1 tokencleaner.go:117] "Starting token cleaner controller" logger="token-cleaner-controller"
	I1014 08:47:24.053271   15224 command_runner.go:130] ! I1014 15:46:17.373486       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I1014 08:47:24.053271   15224 command_runner.go:130] ! I1014 15:46:17.373493       1 shared_informer.go:320] Caches are synced for token_cleaner
	I1014 08:47:24.053362   15224 command_runner.go:130] ! I1014 15:46:17.375506       1 controllermanager.go:797] "Started controller" controller="ttl-after-finished-controller"
	I1014 08:47:24.053394   15224 command_runner.go:130] ! I1014 15:46:17.375684       1 ttlafterfinished_controller.go:112] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I1014 08:47:24.053394   15224 command_runner.go:130] ! I1014 15:46:17.375694       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I1014 08:47:24.053465   15224 command_runner.go:130] ! I1014 15:46:17.379552       1 controllermanager.go:797] "Started controller" controller="ephemeral-volume-controller"
	I1014 08:47:24.053494   15224 command_runner.go:130] ! I1014 15:46:17.380063       1 controller.go:173] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I1014 08:47:24.053494   15224 command_runner.go:130] ! I1014 15:46:17.380270       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I1014 08:47:24.053561   15224 command_runner.go:130] ! I1014 15:46:17.413079       1 controllermanager.go:797] "Started controller" controller="namespace-controller"
	I1014 08:47:24.053561   15224 command_runner.go:130] ! I1014 15:46:17.413676       1 namespace_controller.go:202] "Starting namespace controller" logger="namespace-controller"
	I1014 08:47:24.053561   15224 command_runner.go:130] ! I1014 15:46:17.415689       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I1014 08:47:24.053561   15224 command_runner.go:130] ! I1014 15:46:17.418729       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I1014 08:47:24.053561   15224 command_runner.go:130] ! I1014 15:46:17.418858       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I1014 08:47:24.053561   15224 command_runner.go:130] ! I1014 15:46:17.418983       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I1014 08:47:24.053561   15224 command_runner.go:130] ! I1014 15:46:17.420448       1 controllermanager.go:797] "Started controller" controller="certificatesigningrequest-signing-controller"
	I1014 08:47:24.053561   15224 command_runner.go:130] ! I1014 15:46:17.420573       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I1014 08:47:24.053561   15224 command_runner.go:130] ! I1014 15:46:17.420658       1 controllermanager.go:775] "Warning: skipping controller" controller="node-route-controller"
	I1014 08:47:24.053561   15224 command_runner.go:130] ! I1014 15:46:17.420878       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I1014 08:47:24.053561   15224 command_runner.go:130] ! I1014 15:46:17.422022       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I1014 08:47:24.053561   15224 command_runner.go:130] ! I1014 15:46:17.422169       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I1014 08:47:24.054315   15224 command_runner.go:130] ! I1014 15:46:17.422636       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I1014 08:47:24.054315   15224 command_runner.go:130] ! I1014 15:46:17.425521       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I1014 08:47:24.055014   15224 command_runner.go:130] ! I1014 15:46:17.425557       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I1014 08:47:24.055014   15224 command_runner.go:130] ! I1014 15:46:17.425747       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I1014 08:47:24.055091   15224 command_runner.go:130] ! I1014 15:46:17.425569       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I1014 08:47:24.055154   15224 command_runner.go:130] ! I1014 15:46:17.425577       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I1014 08:47:24.055195   15224 command_runner.go:130] ! E1014 15:46:17.429609       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I1014 08:47:24.055195   15224 command_runner.go:130] ! I1014 15:46:17.429771       1 controllermanager.go:775] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I1014 08:47:24.055195   15224 command_runner.go:130] ! I1014 15:46:17.432720       1 controllermanager.go:797] "Started controller" controller="persistentvolume-attach-detach-controller"
	I1014 08:47:24.055247   15224 command_runner.go:130] ! I1014 15:46:17.433242       1 attach_detach_controller.go:338] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I1014 08:47:24.055247   15224 command_runner.go:130] ! I1014 15:46:17.433509       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I1014 08:47:24.055298   15224 command_runner.go:130] ! I1014 15:46:17.437867       1 controllermanager.go:797] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I1014 08:47:24.055298   15224 command_runner.go:130] ! I1014 15:46:17.438432       1 pvc_protection_controller.go:105] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I1014 08:47:24.055338   15224 command_runner.go:130] ! I1014 15:46:17.438754       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I1014 08:47:24.055338   15224 command_runner.go:130] ! I1014 15:46:17.466996       1 controllermanager.go:797] "Started controller" controller="garbage-collector-controller"
	I1014 08:47:24.055338   15224 command_runner.go:130] ! I1014 15:46:17.467178       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I1014 08:47:24.055338   15224 command_runner.go:130] ! I1014 15:46:17.467191       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I1014 08:47:24.055338   15224 command_runner.go:130] ! I1014 15:46:17.467211       1 graph_builder.go:351] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I1014 08:47:24.055338   15224 command_runner.go:130] ! I1014 15:46:17.513974       1 controllermanager.go:797] "Started controller" controller="daemonset-controller"
	I1014 08:47:24.055338   15224 command_runner.go:130] ! I1014 15:46:17.514092       1 daemon_controller.go:294] "Starting daemon sets controller" logger="daemonset-controller"
	I1014 08:47:24.055338   15224 command_runner.go:130] ! I1014 15:46:17.514103       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I1014 08:47:24.055338   15224 command_runner.go:130] ! I1014 15:46:17.612272       1 controllermanager.go:797] "Started controller" controller="deployment-controller"
	I1014 08:47:24.055338   15224 command_runner.go:130] ! I1014 15:46:17.612390       1 deployment_controller.go:173] "Starting controller" logger="deployment-controller" controller="deployment"
	I1014 08:47:24.055338   15224 command_runner.go:130] ! I1014 15:46:17.612405       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I1014 08:47:24.055338   15224 command_runner.go:130] ! I1014 15:46:17.715625       1 controllermanager.go:797] "Started controller" controller="ttl-controller"
	I1014 08:47:24.055338   15224 command_runner.go:130] ! I1014 15:46:17.718491       1 ttl_controller.go:127] "Starting TTL controller" logger="ttl-controller"
	I1014 08:47:24.055338   15224 command_runner.go:130] ! I1014 15:46:17.718512       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I1014 08:47:24.055338   15224 command_runner.go:130] ! I1014 15:46:17.762259       1 node_lifecycle_controller.go:430] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I1014 08:47:24.055338   15224 command_runner.go:130] ! I1014 15:46:17.762792       1 controllermanager.go:797] "Started controller" controller="node-lifecycle-controller"
	I1014 08:47:24.055338   15224 command_runner.go:130] ! I1014 15:46:17.763108       1 node_lifecycle_controller.go:464] "Sending events to api server" logger="node-lifecycle-controller"
	I1014 08:47:24.055338   15224 command_runner.go:130] ! I1014 15:46:17.763488       1 node_lifecycle_controller.go:475] "Starting node controller" logger="node-lifecycle-controller"
	I1014 08:47:24.055338   15224 command_runner.go:130] ! I1014 15:46:17.763636       1 shared_informer.go:313] Waiting for caches to sync for taint
	I1014 08:47:24.055338   15224 command_runner.go:130] ! I1014 15:46:17.815269       1 controllermanager.go:797] "Started controller" controller="persistentvolume-expander-controller"
	I1014 08:47:24.055338   15224 command_runner.go:130] ! I1014 15:46:17.815926       1 controllermanager.go:749] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I1014 08:47:24.055338   15224 command_runner.go:130] ! I1014 15:46:17.815820       1 expand_controller.go:328] "Starting expand controller" logger="persistentvolume-expander-controller"
	I1014 08:47:24.055338   15224 command_runner.go:130] ! I1014 15:46:17.815981       1 shared_informer.go:313] Waiting for caches to sync for expand
	I1014 08:47:24.055338   15224 command_runner.go:130] ! I1014 15:46:17.865803       1 controllermanager.go:797] "Started controller" controller="taint-eviction-controller"
	I1014 08:47:24.055338   15224 command_runner.go:130] ! I1014 15:46:17.865833       1 controllermanager.go:749] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I1014 08:47:24.055338   15224 command_runner.go:130] ! I1014 15:46:17.865908       1 taint_eviction.go:281] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I1014 08:47:24.055338   15224 command_runner.go:130] ! I1014 15:46:17.865945       1 taint_eviction.go:287] "Sending events to api server" logger="taint-eviction-controller"
	I1014 08:47:24.055338   15224 command_runner.go:130] ! I1014 15:46:17.865986       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I1014 08:47:24.055338   15224 command_runner.go:130] ! I1014 15:46:17.923932       1 controllermanager.go:797] "Started controller" controller="job-controller"
	I1014 08:47:24.056024   15224 command_runner.go:130] ! I1014 15:46:17.924153       1 job_controller.go:226] "Starting job controller" logger="job-controller"
	I1014 08:47:24.056024   15224 command_runner.go:130] ! I1014 15:46:17.924184       1 shared_informer.go:313] Waiting for caches to sync for job
	I1014 08:47:24.056081   15224 command_runner.go:130] ! I1014 15:46:17.978728       1 controllermanager.go:797] "Started controller" controller="root-ca-certificate-publisher-controller"
	I1014 08:47:24.056081   15224 command_runner.go:130] ! I1014 15:46:17.978796       1 publisher.go:107] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I1014 08:47:24.056134   15224 command_runner.go:130] ! I1014 15:46:17.978809       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I1014 08:47:24.056162   15224 command_runner.go:130] ! I1014 15:46:18.018003       1 controllermanager.go:797] "Started controller" controller="endpointslice-controller"
	I1014 08:47:24.056196   15224 command_runner.go:130] ! I1014 15:46:18.018177       1 endpointslice_controller.go:281] "Starting endpoint slice controller" logger="endpointslice-controller"
	I1014 08:47:24.056196   15224 command_runner.go:130] ! I1014 15:46:18.018192       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I1014 08:47:24.056196   15224 command_runner.go:130] ! I1014 15:46:18.077409       1 controllermanager.go:797] "Started controller" controller="statefulset-controller"
	I1014 08:47:24.056196   15224 command_runner.go:130] ! I1014 15:46:18.078007       1 stateful_set.go:166] "Starting stateful set controller" logger="statefulset-controller"
	I1014 08:47:24.056196   15224 command_runner.go:130] ! I1014 15:46:18.078026       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I1014 08:47:24.056196   15224 command_runner.go:130] ! I1014 15:46:18.245465       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I1014 08:47:24.056196   15224 command_runner.go:130] ! I1014 15:46:18.246368       1 controllermanager.go:797] "Started controller" controller="node-ipam-controller"
	I1014 08:47:24.056196   15224 command_runner.go:130] ! I1014 15:46:18.246712       1 node_ipam_controller.go:141] "Starting ipam controller" logger="node-ipam-controller"
	I1014 08:47:24.056196   15224 command_runner.go:130] ! I1014 15:46:18.246910       1 shared_informer.go:313] Waiting for caches to sync for node
	I1014 08:47:24.056196   15224 command_runner.go:130] ! I1014 15:46:18.264869       1 controllermanager.go:797] "Started controller" controller="persistentvolume-binder-controller"
	I1014 08:47:24.056196   15224 command_runner.go:130] ! I1014 15:46:18.264984       1 pv_controller_base.go:308] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I1014 08:47:24.056196   15224 command_runner.go:130] ! I1014 15:46:18.266232       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I1014 08:47:24.056196   15224 command_runner.go:130] ! I1014 15:46:18.321121       1 controllermanager.go:797] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I1014 08:47:24.056196   15224 command_runner.go:130] ! I1014 15:46:18.323482       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I1014 08:47:24.056196   15224 command_runner.go:130] ! I1014 15:46:18.323903       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I1014 08:47:24.056196   15224 command_runner.go:130] ! I1014 15:46:18.431796       1 controllermanager.go:797] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I1014 08:47:24.056196   15224 command_runner.go:130] ! I1014 15:46:18.431873       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I1014 08:47:24.056196   15224 command_runner.go:130] ! I1014 15:46:18.465851       1 controllermanager.go:797] "Started controller" controller="persistentvolume-protection-controller"
	I1014 08:47:24.056196   15224 command_runner.go:130] ! I1014 15:46:18.468767       1 pv_protection_controller.go:81] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I1014 08:47:24.056196   15224 command_runner.go:130] ! I1014 15:46:18.469028       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I1014 08:47:24.056196   15224 command_runner.go:130] ! I1014 15:46:18.485571       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I1014 08:47:24.056196   15224 command_runner.go:130] ! I1014 15:46:18.534720       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I1014 08:47:24.056196   15224 command_runner.go:130] ! I1014 15:46:18.539015       1 shared_informer.go:320] Caches are synced for PVC protection
	I1014 08:47:24.056196   15224 command_runner.go:130] ! I1014 15:46:18.541399       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-671000\" does not exist"
	I1014 08:47:24.056196   15224 command_runner.go:130] ! I1014 15:46:18.541615       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-671000-m02\" does not exist"
	I1014 08:47:24.056196   15224 command_runner.go:130] ! I1014 15:46:18.549102       1 shared_informer.go:320] Caches are synced for TTL
	I1014 08:47:24.056742   15224 command_runner.go:130] ! I1014 15:46:18.549549       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-671000-m03\" does not exist"
	I1014 08:47:24.056791   15224 command_runner.go:130] ! I1014 15:46:18.550590       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I1014 08:47:24.056791   15224 command_runner.go:130] ! I1014 15:46:18.551387       1 shared_informer.go:320] Caches are synced for node
	I1014 08:47:24.056791   15224 command_runner.go:130] ! I1014 15:46:18.554673       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I1014 08:47:24.056791   15224 command_runner.go:130] ! I1014 15:46:18.557592       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1014 08:47:24.056791   15224 command_runner.go:130] ! I1014 15:46:18.558471       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I1014 08:47:24.056893   15224 command_runner.go:130] ! I1014 15:46:18.558669       1 shared_informer.go:320] Caches are synced for cidrallocator
	I1014 08:47:24.056893   15224 command_runner.go:130] ! I1014 15:46:18.559066       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000"
	I1014 08:47:24.056893   15224 command_runner.go:130] ! I1014 15:46:18.559166       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:24.057121   15224 command_runner.go:130] ! I1014 15:46:18.559144       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:24.057271   15224 command_runner.go:130] ! I1014 15:46:18.560823       1 shared_informer.go:320] Caches are synced for endpoint
	I1014 08:47:24.057271   15224 command_runner.go:130] ! I1014 15:46:18.563147       1 shared_informer.go:320] Caches are synced for GC
	I1014 08:47:24.057271   15224 command_runner.go:130] ! I1014 15:46:18.566072       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I1014 08:47:24.057271   15224 command_runner.go:130] ! I1014 15:46:18.566447       1 shared_informer.go:320] Caches are synced for persistent volume
	I1014 08:47:24.057271   15224 command_runner.go:130] ! I1014 15:46:18.566267       1 shared_informer.go:320] Caches are synced for service account
	I1014 08:47:24.057271   15224 command_runner.go:130] ! I1014 15:46:18.570369       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I1014 08:47:24.057271   15224 command_runner.go:130] ! I1014 15:46:18.570522       1 shared_informer.go:320] Caches are synced for PV protection
	I1014 08:47:24.057271   15224 command_runner.go:130] ! I1014 15:46:18.577368       1 shared_informer.go:320] Caches are synced for TTL after finished
	I1014 08:47:24.057271   15224 command_runner.go:130] ! I1014 15:46:18.580187       1 shared_informer.go:320] Caches are synced for crt configmap
	I1014 08:47:24.057271   15224 command_runner.go:130] ! I1014 15:46:18.580534       1 shared_informer.go:320] Caches are synced for ephemeral
	I1014 08:47:24.057271   15224 command_runner.go:130] ! I1014 15:46:18.585372       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I1014 08:47:24.057271   15224 command_runner.go:130] ! I1014 15:46:18.593972       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I1014 08:47:24.057271   15224 command_runner.go:130] ! I1014 15:46:18.595014       1 shared_informer.go:320] Caches are synced for cronjob
	I1014 08:47:24.057271   15224 command_runner.go:130] ! I1014 15:46:18.600012       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I1014 08:47:24.057271   15224 command_runner.go:130] ! I1014 15:46:18.602930       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I1014 08:47:24.057271   15224 command_runner.go:130] ! I1014 15:46:18.609680       1 shared_informer.go:320] Caches are synced for ReplicationController
	I1014 08:47:24.057271   15224 command_runner.go:130] ! I1014 15:46:18.613447       1 shared_informer.go:320] Caches are synced for deployment
	I1014 08:47:24.057271   15224 command_runner.go:130] ! I1014 15:46:18.616246       1 shared_informer.go:320] Caches are synced for expand
	I1014 08:47:24.057808   15224 command_runner.go:130] ! I1014 15:46:18.616739       1 shared_informer.go:320] Caches are synced for namespace
	I1014 08:47:24.057883   15224 command_runner.go:130] ! I1014 15:46:18.618534       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I1014 08:47:24.057883   15224 command_runner.go:130] ! I1014 15:46:18.625249       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I1014 08:47:24.057883   15224 command_runner.go:130] ! I1014 15:46:18.630423       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I1014 08:47:24.057883   15224 command_runner.go:130] ! I1014 15:46:18.632938       1 shared_informer.go:320] Caches are synced for HPA
	I1014 08:47:24.057959   15224 command_runner.go:130] ! I1014 15:46:18.633193       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I1014 08:47:24.057959   15224 command_runner.go:130] ! I1014 15:46:18.634381       1 shared_informer.go:320] Caches are synced for job
	I1014 08:47:24.057959   15224 command_runner.go:130] ! I1014 15:46:18.634623       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1014 08:47:24.057959   15224 command_runner.go:130] ! I1014 15:46:18.634920       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I1014 08:47:24.057959   15224 command_runner.go:130] ! I1014 15:46:18.649619       1 shared_informer.go:320] Caches are synced for disruption
	I1014 08:47:24.058035   15224 command_runner.go:130] ! I1014 15:46:18.668155       1 shared_informer.go:320] Caches are synced for taint
	I1014 08:47:24.058035   15224 command_runner.go:130] ! I1014 15:46:18.670026       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1014 08:47:24.058035   15224 command_runner.go:130] ! I1014 15:46:18.680357       1 shared_informer.go:320] Caches are synced for stateful set
	I1014 08:47:24.058103   15224 command_runner.go:130] ! I1014 15:46:18.700582       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000"
	I1014 08:47:24.058129   15224 command_runner.go:130] ! I1014 15:46:18.708812       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:24.058162   15224 command_runner.go:130] ! I1014 15:46:18.714134       1 shared_informer.go:320] Caches are synced for daemon sets
	I1014 08:47:24.058162   15224 command_runner.go:130] ! I1014 15:46:18.718536       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-671000"
	I1014 08:47:24.058162   15224 command_runner.go:130] ! I1014 15:46:18.718841       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-671000-m02"
	I1014 08:47:24.058162   15224 command_runner.go:130] ! I1014 15:46:18.719036       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-671000-m03"
	I1014 08:47:24.058162   15224 command_runner.go:130] ! I1014 15:46:18.721210       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="135.448763ms"
	I1014 08:47:24.058162   15224 command_runner.go:130] ! I1014 15:46:18.721514       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="40.1µs"
	I1014 08:47:24.058162   15224 command_runner.go:130] ! I1014 15:46:18.721809       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="136.173363ms"
	I1014 08:47:24.058162   15224 command_runner.go:130] ! I1014 15:46:18.722033       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="29.5µs"
	I1014 08:47:24.058162   15224 command_runner.go:130] ! I1014 15:46:18.722234       1 node_lifecycle_controller.go:1036] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1014 08:47:24.058162   15224 command_runner.go:130] ! I1014 15:46:18.777385       1 shared_informer.go:320] Caches are synced for resource quota
	I1014 08:47:24.058162   15224 command_runner.go:130] ! I1014 15:46:18.786812       1 shared_informer.go:320] Caches are synced for resource quota
	I1014 08:47:24.058162   15224 command_runner.go:130] ! I1014 15:46:18.833914       1 shared_informer.go:320] Caches are synced for attach detach
	I1014 08:47:24.058162   15224 command_runner.go:130] ! I1014 15:46:19.252391       1 shared_informer.go:320] Caches are synced for garbage collector
	I1014 08:47:24.058162   15224 command_runner.go:130] ! I1014 15:46:19.267855       1 shared_informer.go:320] Caches are synced for garbage collector
	I1014 08:47:24.058162   15224 command_runner.go:130] ! I1014 15:46:19.268119       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I1014 08:47:24.058162   15224 command_runner.go:130] ! I1014 15:46:59.871635       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000"
	I1014 08:47:24.058162   15224 command_runner.go:130] ! I1014 15:46:59.892163       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000"
	I1014 08:47:24.058162   15224 command_runner.go:130] ! I1014 15:47:03.736416       1 node_lifecycle_controller.go:1055] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I1014 08:47:24.058162   15224 command_runner.go:130] ! I1014 15:47:13.821153       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:24.058162   15224 command_runner.go:130] ! I1014 15:47:20.979721       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="114.5µs"
	I1014 08:47:24.058162   15224 command_runner.go:130] ! I1014 15:47:22.061324       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="10.05527ms"
	I1014 08:47:24.058162   15224 command_runner.go:130] ! I1014 15:47:22.062652       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="53.8µs"
	I1014 08:47:24.058162   15224 command_runner.go:130] ! I1014 15:47:22.098955       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="28.422114ms"
	I1014 08:47:24.058162   15224 command_runner.go:130] ! I1014 15:47:22.099794       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="313.699µs"
	I1014 08:47:24.058162   15224 command_runner.go:130] ! I1014 15:47:23.920002       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
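
Each "Gathering logs for …" step in this post-mortem shells out through minikube's ssh_runner to run the quoted command inside the VM, as with the `docker logs --tail 400 8af48c446f7e` invocation that produced the kube-controller-manager section above. A standalone sketch of that step, run locally rather than over SSH — the container ID is simply the one quoted above, and this is not the harness's actual implementation:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        // Placeholder taken from the log above; any running container works.
        containerID := "8af48c446f7e"

        // Equivalent of the gathered command: docker logs --tail 400 <id>.
        cmd := exec.Command("docker", "logs", "--tail", "400", containerID)
        cmd.Stdout = os.Stdout
        cmd.Stderr = os.Stderr // docker logs replays the container's stderr here

        if err := cmd.Run(); err != nil {
            fmt.Fprintln(os.Stderr, "docker logs failed:", err)
            os.Exit(1)
        }
    }
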
	I1014 08:47:24.074857   15224 logs.go:123] Gathering logs for Docker ...
	I1014 08:47:24.074857   15224 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 08:47:24.107429   15224 command_runner.go:130] > Oct 14 15:44:44 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1014 08:47:24.107429   15224 command_runner.go:130] > Oct 14 15:44:44 minikube cri-dockerd[221]: time="2024-10-14T15:44:44Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I1014 08:47:24.107429   15224 command_runner.go:130] > Oct 14 15:44:44 minikube cri-dockerd[221]: time="2024-10-14T15:44:44Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1014 08:47:24.107429   15224 command_runner.go:130] > Oct 14 15:44:44 minikube cri-dockerd[221]: time="2024-10-14T15:44:44Z" level=info msg="Start docker client with request timeout 0s"
	I1014 08:47:24.108423   15224 command_runner.go:130] > Oct 14 15:44:44 minikube cri-dockerd[221]: time="2024-10-14T15:44:44Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1014 08:47:24.108456   15224 command_runner.go:130] > Oct 14 15:44:45 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I1014 08:47:24.108585   15224 command_runner.go:130] > Oct 14 15:44:45 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I1014 08:47:24.108585   15224 command_runner.go:130] > Oct 14 15:44:45 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I1014 08:47:24.108648   15224 command_runner.go:130] > Oct 14 15:44:47 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I1014 08:47:24.108648   15224 command_runner.go:130] > Oct 14 15:44:47 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I1014 08:47:24.108648   15224 command_runner.go:130] > Oct 14 15:44:47 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1014 08:47:24.108648   15224 command_runner.go:130] > Oct 14 15:44:47 minikube cri-dockerd[413]: time="2024-10-14T15:44:47Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I1014 08:47:24.108648   15224 command_runner.go:130] > Oct 14 15:44:47 minikube cri-dockerd[413]: time="2024-10-14T15:44:47Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1014 08:47:24.108648   15224 command_runner.go:130] > Oct 14 15:44:47 minikube cri-dockerd[413]: time="2024-10-14T15:44:47Z" level=info msg="Start docker client with request timeout 0s"
	I1014 08:47:24.108648   15224 command_runner.go:130] > Oct 14 15:44:47 minikube cri-dockerd[413]: time="2024-10-14T15:44:47Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1014 08:47:24.108648   15224 command_runner.go:130] > Oct 14 15:44:47 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I1014 08:47:24.108648   15224 command_runner.go:130] > Oct 14 15:44:47 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I1014 08:47:24.108648   15224 command_runner.go:130] > Oct 14 15:44:47 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I1014 08:47:24.108648   15224 command_runner.go:130] > Oct 14 15:44:49 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I1014 08:47:24.108648   15224 command_runner.go:130] > Oct 14 15:44:49 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I1014 08:47:24.108648   15224 command_runner.go:130] > Oct 14 15:44:49 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1014 08:47:24.108648   15224 command_runner.go:130] > Oct 14 15:44:50 minikube cri-dockerd[421]: time="2024-10-14T15:44:50Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I1014 08:47:24.108648   15224 command_runner.go:130] > Oct 14 15:44:50 minikube cri-dockerd[421]: time="2024-10-14T15:44:50Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1014 08:47:24.108648   15224 command_runner.go:130] > Oct 14 15:44:50 minikube cri-dockerd[421]: time="2024-10-14T15:44:50Z" level=info msg="Start docker client with request timeout 0s"
	I1014 08:47:24.108648   15224 command_runner.go:130] > Oct 14 15:44:50 minikube cri-dockerd[421]: time="2024-10-14T15:44:50Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1014 08:47:24.108648   15224 command_runner.go:130] > Oct 14 15:44:50 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I1014 08:47:24.108648   15224 command_runner.go:130] > Oct 14 15:44:50 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I1014 08:47:24.108648   15224 command_runner.go:130] > Oct 14 15:44:50 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I1014 08:47:24.108648   15224 command_runner.go:130] > Oct 14 15:44:52 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I1014 08:47:24.108648   15224 command_runner.go:130] > Oct 14 15:44:52 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I1014 08:47:24.108648   15224 command_runner.go:130] > Oct 14 15:44:52 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I1014 08:47:24.108648   15224 command_runner.go:130] > Oct 14 15:44:52 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I1014 08:47:24.108648   15224 command_runner.go:130] > Oct 14 15:44:52 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
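
The crash loop above is cri-dockerd exiting with status 1 because the Docker socket is not yet accepting connections; after three scheduled restarts systemd rate-limits the unit ("Start request repeated too quickly"). A minimal probe of that same precondition, assuming the default socket path — a sketch for illustration, not part of the test suite:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // cri-dockerd fails fast when this dial would fail; systemd then
        // retries the unit until its start-rate limit is hit.
        conn, err := net.DialTimeout("unix", "/var/run/docker.sock", 2*time.Second)
        if err != nil {
            fmt.Println("docker daemon not reachable:", err)
            return
        }
        defer conn.Close()
        fmt.Println("docker socket is accepting connections")
    }
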
	I1014 08:47:24.108648   15224 command_runner.go:130] > Oct 14 15:45:33 multinode-671000 systemd[1]: Starting Docker Application Container Engine...
	I1014 08:47:24.108648   15224 command_runner.go:130] > Oct 14 15:45:33 multinode-671000 dockerd[649]: time="2024-10-14T15:45:33.956984837Z" level=info msg="Starting up"
	I1014 08:47:24.109180   15224 command_runner.go:130] > Oct 14 15:45:33 multinode-671000 dockerd[649]: time="2024-10-14T15:45:33.957924243Z" level=info msg="containerd not running, starting managed containerd"
	I1014 08:47:24.109232   15224 command_runner.go:130] > Oct 14 15:45:33 multinode-671000 dockerd[649]: time="2024-10-14T15:45:33.959335951Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=655
	I1014 08:47:24.109232   15224 command_runner.go:130] > Oct 14 15:45:33 multinode-671000 dockerd[655]: time="2024-10-14T15:45:33.994773864Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	I1014 08:47:24.109232   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.020772213Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I1014 08:47:24.109310   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.021015015Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I1014 08:47:24.109371   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.021095615Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I1014 08:47:24.109371   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.021147816Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I1014 08:47:24.109371   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.021828519Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I1014 08:47:24.109371   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.021976120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I1014 08:47:24.109371   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.022248222Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I1014 08:47:24.109371   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.022376622Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I1014 08:47:24.109371   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.022401523Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I1014 08:47:24.109371   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.022414623Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I1014 08:47:24.109371   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.023030126Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I1014 08:47:24.109371   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.023715230Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I1014 08:47:24.109371   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.027058949Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I1014 08:47:24.109371   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.027212250Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I1014 08:47:24.109371   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.027346050Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I1014 08:47:24.109371   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.027434351Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I1014 08:47:24.109371   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.028070055Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I1014 08:47:24.109371   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.028254556Z" level=info msg="metadata content store policy set" policy=shared
	I1014 08:47:24.109371   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.033722086Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I1014 08:47:24.109371   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.033900187Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I1014 08:47:24.109371   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.033927888Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I1014 08:47:24.109371   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.033944088Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I1014 08:47:24.109371   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.033959488Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I1014 08:47:24.109371   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.034029088Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I1014 08:47:24.109371   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.034638992Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I1014 08:47:24.109371   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.034898493Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I1014 08:47:24.109903   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.034993394Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I1014 08:47:24.109954   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035025394Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I1014 08:47:24.109954   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035042394Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I1014 08:47:24.110051   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035056394Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I1014 08:47:24.110051   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035070894Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I1014 08:47:24.110132   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035091294Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I1014 08:47:24.110132   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035125794Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I1014 08:47:24.110132   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035139394Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I1014 08:47:24.110132   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035152195Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I1014 08:47:24.110132   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035200795Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I1014 08:47:24.110132   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035227495Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I1014 08:47:24.110132   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035242395Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I1014 08:47:24.110132   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035255095Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I1014 08:47:24.110132   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035268595Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I1014 08:47:24.110132   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035283595Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I1014 08:47:24.110132   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035296895Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I1014 08:47:24.110132   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035314495Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I1014 08:47:24.110132   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035330096Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I1014 08:47:24.110132   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035343596Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I1014 08:47:24.110132   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035364096Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I1014 08:47:24.110132   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035376796Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I1014 08:47:24.110132   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035388896Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I1014 08:47:24.110132   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035401196Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I1014 08:47:24.110132   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035419096Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I1014 08:47:24.110132   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035441896Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I1014 08:47:24.110132   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035454496Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I1014 08:47:24.110132   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035465896Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I1014 08:47:24.110132   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035512897Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I1014 08:47:24.110132   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035554297Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I1014 08:47:24.110132   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035568497Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I1014 08:47:24.110132   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035580597Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I1014 08:47:24.110755   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035590797Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I1014 08:47:24.110818   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035604297Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I1014 08:47:24.110818   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035619397Z" level=info msg="NRI interface is disabled by configuration."
	I1014 08:47:24.110882   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035934999Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I1014 08:47:24.110882   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.036229901Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I1014 08:47:24.110882   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.036295501Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I1014 08:47:24.110882   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.036322201Z" level=info msg="containerd successfully booted in 0.043787s"
	I1014 08:47:24.110882   15224 command_runner.go:130] > Oct 14 15:45:35 multinode-671000 dockerd[649]: time="2024-10-14T15:45:35.016752326Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1014 08:47:24.110882   15224 command_runner.go:130] > Oct 14 15:45:35 multinode-671000 dockerd[649]: time="2024-10-14T15:45:35.204043816Z" level=info msg="Loading containers: start."
	I1014 08:47:24.110882   15224 command_runner.go:130] > Oct 14 15:45:35 multinode-671000 dockerd[649]: time="2024-10-14T15:45:35.545951324Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1014 08:47:24.110882   15224 command_runner.go:130] > Oct 14 15:45:35 multinode-671000 dockerd[649]: time="2024-10-14T15:45:35.688138626Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I1014 08:47:24.110882   15224 command_runner.go:130] > Oct 14 15:45:35 multinode-671000 dockerd[649]: time="2024-10-14T15:45:35.780023455Z" level=info msg="Loading containers: done."
	I1014 08:47:24.110882   15224 command_runner.go:130] > Oct 14 15:45:35 multinode-671000 dockerd[649]: time="2024-10-14T15:45:35.809569125Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	I1014 08:47:24.110882   15224 command_runner.go:130] > Oct 14 15:45:35 multinode-671000 dockerd[649]: time="2024-10-14T15:45:35.809610125Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	I1014 08:47:24.110882   15224 command_runner.go:130] > Oct 14 15:45:35 multinode-671000 dockerd[649]: time="2024-10-14T15:45:35.809633825Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
	I1014 08:47:24.110882   15224 command_runner.go:130] > Oct 14 15:45:35 multinode-671000 dockerd[649]: time="2024-10-14T15:45:35.810490930Z" level=info msg="Daemon has completed initialization"
	I1014 08:47:24.110882   15224 command_runner.go:130] > Oct 14 15:45:35 multinode-671000 dockerd[649]: time="2024-10-14T15:45:35.853736479Z" level=info msg="API listen on /var/run/docker.sock"
	I1014 08:47:24.110882   15224 command_runner.go:130] > Oct 14 15:45:35 multinode-671000 dockerd[649]: time="2024-10-14T15:45:35.854139881Z" level=info msg="API listen on [::]:2376"
	I1014 08:47:24.110882   15224 command_runner.go:130] > Oct 14 15:45:35 multinode-671000 systemd[1]: Started Docker Application Container Engine.
	I1014 08:47:24.110882   15224 command_runner.go:130] > Oct 14 15:46:01 multinode-671000 systemd[1]: Stopping Docker Application Container Engine...
	I1014 08:47:24.110882   15224 command_runner.go:130] > Oct 14 15:46:01 multinode-671000 dockerd[649]: time="2024-10-14T15:46:01.049459779Z" level=info msg="Processing signal 'terminated'"
	I1014 08:47:24.110882   15224 command_runner.go:130] > Oct 14 15:46:01 multinode-671000 dockerd[649]: time="2024-10-14T15:46:01.053392981Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I1014 08:47:24.110882   15224 command_runner.go:130] > Oct 14 15:46:01 multinode-671000 dockerd[649]: time="2024-10-14T15:46:01.053568081Z" level=info msg="Daemon shutdown complete"
	I1014 08:47:24.110882   15224 command_runner.go:130] > Oct 14 15:46:01 multinode-671000 dockerd[649]: time="2024-10-14T15:46:01.053889681Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I1014 08:47:24.110882   15224 command_runner.go:130] > Oct 14 15:46:01 multinode-671000 dockerd[649]: time="2024-10-14T15:46:01.054172781Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I1014 08:47:24.110882   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 systemd[1]: docker.service: Deactivated successfully.
	I1014 08:47:24.110882   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 systemd[1]: Stopped Docker Application Container Engine.
	I1014 08:47:24.110882   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 systemd[1]: Starting Docker Application Container Engine...
	I1014 08:47:24.110882   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1087]: time="2024-10-14T15:46:02.109177376Z" level=info msg="Starting up"
	I1014 08:47:24.111429   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1087]: time="2024-10-14T15:46:02.110667577Z" level=info msg="containerd not running, starting managed containerd"
	I1014 08:47:24.111499   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1087]: time="2024-10-14T15:46:02.112008177Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1093
	I1014 08:47:24.111499   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.143199292Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	I1014 08:47:24.111594   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.168149004Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I1014 08:47:24.111594   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.168197704Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I1014 08:47:24.111661   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.168231304Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I1014 08:47:24.111661   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.168244704Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I1014 08:47:24.111661   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.168266504Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I1014 08:47:24.111661   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.168317904Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I1014 08:47:24.111661   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.168445004Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I1014 08:47:24.111661   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.168531404Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I1014 08:47:24.111661   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.168550204Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I1014 08:47:24.111661   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.168561104Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I1014 08:47:24.111661   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.168583904Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I1014 08:47:24.111661   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.168690904Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I1014 08:47:24.111661   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.171907506Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I1014 08:47:24.111661   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.172002906Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I1014 08:47:24.111661   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.172175606Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I1014 08:47:24.111661   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.172377606Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I1014 08:47:24.111661   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.172424606Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I1014 08:47:24.111661   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.172461506Z" level=info msg="metadata content store policy set" policy=shared
	I1014 08:47:24.111661   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.172795106Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I1014 08:47:24.111661   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.172882406Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I1014 08:47:24.111661   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.172902406Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I1014 08:47:24.112281   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.172916306Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I1014 08:47:24.112281   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.172930506Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I1014 08:47:24.112397   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.172992206Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I1014 08:47:24.112468   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173380806Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I1014 08:47:24.112468   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173626906Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I1014 08:47:24.112574   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173734806Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I1014 08:47:24.112574   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173758306Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I1014 08:47:24.112574   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173794906Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I1014 08:47:24.112574   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173849506Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I1014 08:47:24.112574   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173864606Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I1014 08:47:24.112574   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173878206Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I1014 08:47:24.112574   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173900507Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I1014 08:47:24.112574   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173916207Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I1014 08:47:24.112574   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173928607Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I1014 08:47:24.112574   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173940507Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I1014 08:47:24.112574   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173959407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I1014 08:47:24.112574   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173973007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I1014 08:47:24.112574   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173985207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I1014 08:47:24.112574   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173998307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I1014 08:47:24.112574   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174010307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I1014 08:47:24.112574   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174023407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I1014 08:47:24.112574   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174035407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I1014 08:47:24.112574   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174047207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I1014 08:47:24.112574   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174077107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I1014 08:47:24.112574   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174095807Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I1014 08:47:24.112574   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174107607Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I1014 08:47:24.112574   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174191507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I1014 08:47:24.112574   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174206607Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I1014 08:47:24.112574   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174229207Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I1014 08:47:24.112574   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174259307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I1014 08:47:24.113105   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174352207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I1014 08:47:24.113105   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174370407Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I1014 08:47:24.113173   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174499607Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I1014 08:47:24.113242   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174541907Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I1014 08:47:24.113242   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174556007Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I1014 08:47:24.113420   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174568207Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I1014 08:47:24.113420   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174578207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I1014 08:47:24.113530   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174598407Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I1014 08:47:24.113530   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174612107Z" level=info msg="NRI interface is disabled by configuration."
	I1014 08:47:24.113601   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174893107Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I1014 08:47:24.113601   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.175192307Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I1014 08:47:24.113601   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.175271607Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I1014 08:47:24.113601   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.175364007Z" level=info msg="containerd successfully booted in 0.032943s"
	I1014 08:47:24.113601   15224 command_runner.go:130] > Oct 14 15:46:03 multinode-671000 dockerd[1087]: time="2024-10-14T15:46:03.157176768Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1014 08:47:24.113601   15224 command_runner.go:130] > Oct 14 15:46:03 multinode-671000 dockerd[1087]: time="2024-10-14T15:46:03.188626383Z" level=info msg="Loading containers: start."
	I1014 08:47:24.113601   15224 command_runner.go:130] > Oct 14 15:46:03 multinode-671000 dockerd[1087]: time="2024-10-14T15:46:03.419822091Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1014 08:47:24.113601   15224 command_runner.go:130] > Oct 14 15:46:03 multinode-671000 dockerd[1087]: time="2024-10-14T15:46:03.533275144Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I1014 08:47:24.113601   15224 command_runner.go:130] > Oct 14 15:46:03 multinode-671000 dockerd[1087]: time="2024-10-14T15:46:03.631380390Z" level=info msg="Loading containers: done."
	I1014 08:47:24.113601   15224 command_runner.go:130] > Oct 14 15:46:03 multinode-671000 dockerd[1087]: time="2024-10-14T15:46:03.656005002Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
	I1014 08:47:24.113601   15224 command_runner.go:130] > Oct 14 15:46:03 multinode-671000 dockerd[1087]: time="2024-10-14T15:46:03.656245502Z" level=info msg="Daemon has completed initialization"
	I1014 08:47:24.113601   15224 command_runner.go:130] > Oct 14 15:46:03 multinode-671000 dockerd[1087]: time="2024-10-14T15:46:03.695426820Z" level=info msg="API listen on /var/run/docker.sock"
	I1014 08:47:24.113601   15224 command_runner.go:130] > Oct 14 15:46:03 multinode-671000 systemd[1]: Started Docker Application Container Engine.
	I1014 08:47:24.113601   15224 command_runner.go:130] > Oct 14 15:46:03 multinode-671000 dockerd[1087]: time="2024-10-14T15:46:03.695638120Z" level=info msg="API listen on [::]:2376"
	I1014 08:47:24.113601   15224 command_runner.go:130] > Oct 14 15:46:04 multinode-671000 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1014 08:47:24.113601   15224 command_runner.go:130] > Oct 14 15:46:04 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:04Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I1014 08:47:24.113601   15224 command_runner.go:130] > Oct 14 15:46:04 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:04Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1014 08:47:24.113601   15224 command_runner.go:130] > Oct 14 15:46:04 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:04Z" level=info msg="Start docker client with request timeout 0s"
	I1014 08:47:24.113601   15224 command_runner.go:130] > Oct 14 15:46:04 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:04Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I1014 08:47:24.113601   15224 command_runner.go:130] > Oct 14 15:46:04 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:04Z" level=info msg="Loaded network plugin cni"
	I1014 08:47:24.113601   15224 command_runner.go:130] > Oct 14 15:46:04 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:04Z" level=info msg="Docker cri networking managed by network plugin cni"
	I1014 08:47:24.113601   15224 command_runner.go:130] > Oct 14 15:46:04 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:04Z" level=info msg="Setting cgroupDriver cgroupfs"
	I1014 08:47:24.113601   15224 command_runner.go:130] > Oct 14 15:46:04 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:04Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I1014 08:47:24.113601   15224 command_runner.go:130] > Oct 14 15:46:04 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:04Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I1014 08:47:24.113601   15224 command_runner.go:130] > Oct 14 15:46:04 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:04Z" level=info msg="Start cri-dockerd grpc backend"
	I1014 08:47:24.114137   15224 command_runner.go:130] > Oct 14 15:46:04 multinode-671000 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I1014 08:47:24.114137   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:09Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7c65d6cfc9-fs9ct_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"2f8cc9a218fef4446839b0253827fc2dd89f8eccbd66ab7f0e5123c033f0aacc\""
	I1014 08:47:24.114249   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:09Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-7dff88458-vlp7j_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"06e529266db4b2cf3ad09285cddeb89d7d11e858b5156cf73f41198c9500b9f2\""
	I1014 08:47:24.114362   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.635579177Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:24.114362   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.635817077Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:24.114362   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.635919877Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:24.114362   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.636083677Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:24.114467   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.762883836Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:24.114467   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.763092036Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:24.114467   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.763114536Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:24.114568   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.765440937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:24.114568   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:10Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1d3033f871fb11cb3095bcf5c5d43615de9685372a45edf226fe52b2f482bc71/resolv.conf as [nameserver 172.20.96.1]"
	I1014 08:47:24.114568   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.846488476Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:24.114659   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.847106376Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:24.114659   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.847254676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:24.114736   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.854373579Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:24.114736   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.883112593Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:24.114815   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.884477393Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:24.116908   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.884514293Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:24.116965   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.884605993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:24.117054   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:10Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6155e8be2d5d725a4259a45fe10f7ceb3fc581d528a6486633b563a59f331127/resolv.conf as [nameserver 172.20.96.1]"
	I1014 08:47:24.117149   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:11Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7bd4c36606eefc91e9ae07ea5683536fc78fdb6f7f752f44d28787b88540a878/resolv.conf as [nameserver 172.20.96.1]"
	I1014 08:47:24.117209   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.061102976Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:24.117304   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.061201476Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:24.117382   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.061221876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:24.117442   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.061393176Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:24.117442   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:11Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0697a11790e806fa4d679e3d97be4c7193692d1cb8f76d882cf3a75aa8e0c238/resolv.conf as [nameserver 172.20.96.1]"
	I1014 08:47:24.117442   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.312465294Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:24.117442   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.312610794Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:24.117442   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.312646494Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:24.117442   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.312762794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:24.117442   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.422697746Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:24.117442   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.422797746Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:24.117442   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.422816346Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:24.117442   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.423001046Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:24.117442   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.500801282Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:24.117442   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.501016583Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:24.117442   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.501037383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:24.117442   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.504117984Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:24.117442   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:15Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I1014 08:47:24.117442   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.472267615Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:24.117442   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.472571215Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:24.117442   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.472597215Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:24.117442   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.472873315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:24.117442   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.475833517Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:24.117442   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.476013017Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:24.117976   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.476107917Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:24.117976   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.476358717Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:24.117976   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.515050835Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:24.118115   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.515249635Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:24.118179   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.515393835Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:24.118179   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.515565835Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:24.118179   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:16Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6f8bdf552734e59df94d489a081585cdaeeaee729f0a0ae92105117d4d744f3f/resolv.conf as [nameserver 172.20.96.1]"
	I1014 08:47:24.118179   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:16Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/cdcdd532ba1369fe0e866b321f18e0bd99a88a2699c325eea17a070d48f2f024/resolv.conf as [nameserver 172.20.96.1]"
	I1014 08:47:24.118179   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.911588321Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:24.118179   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.913177522Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:24.118179   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.913368522Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:24.118179   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.914060722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:24.118179   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:16Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7bcadf1f0885ff3411b477a64c6af366c77eb264ce7a761f174b079b11e2e703/resolv.conf as [nameserver 172.20.96.1]"
	I1014 08:47:24.118179   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:17.063841193Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:24.118179   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:17.063929693Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:24.118179   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:17.063946093Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:24.118179   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:17.064242693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:24.118179   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:17.206735160Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:24.118179   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:17.207544260Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:24.118179   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:17.208633061Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:24.118179   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:17.224429668Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:24.118179   15224 command_runner.go:130] > Oct 14 15:46:47 multinode-671000 dockerd[1087]: time="2024-10-14T15:46:47.556424473Z" level=info msg="ignoring event" container=c76c2585681071ddc096fbc042a644b58d0b7d5d604107c6906a076c7aa1cca6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1014 08:47:24.118179   15224 command_runner.go:130] > Oct 14 15:46:47 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:47.559859508Z" level=info msg="shim disconnected" id=c76c2585681071ddc096fbc042a644b58d0b7d5d604107c6906a076c7aa1cca6 namespace=moby
	I1014 08:47:24.118179   15224 command_runner.go:130] > Oct 14 15:46:47 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:47.560270512Z" level=warning msg="cleaning up after shim disconnected" id=c76c2585681071ddc096fbc042a644b58d0b7d5d604107c6906a076c7aa1cca6 namespace=moby
	I1014 08:47:24.118713   15224 command_runner.go:130] > Oct 14 15:46:47 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:47.560505714Z" level=info msg="cleaning up dead shim" namespace=moby
	I1014 08:47:24.118713   15224 command_runner.go:130] > Oct 14 15:47:03 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:03.070959923Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:24.118713   15224 command_runner.go:130] > Oct 14 15:47:03 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:03.071176624Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:24.118713   15224 command_runner.go:130] > Oct 14 15:47:03 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:03.071240924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:24.118871   15224 command_runner.go:130] > Oct 14 15:47:03 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:03.071756926Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:24.118934   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.071716036Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:24.118988   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.071943436Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:24.119082   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.071968036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:24.119110   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.072116937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:24.119178   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:47:20Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/429b989a1a986d23a2e5aee0de1aef1e683a014bebb587981622bd80a3ac5221/resolv.conf as [nameserver 172.20.96.1]"
	I1014 08:47:24.119178   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.295865797Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:24.119178   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.295993998Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:24.119178   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.296019698Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:24.119178   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.296117898Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:24.119178   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:47:20Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9092d17516eb35243fd461a360605e738727838ee50f870f3bd6c290fd061d20/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I1014 08:47:24.119178   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.536751498Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:24.119178   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.537062099Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:24.119178   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.537100499Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:24.119178   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.537246499Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:24.119178   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.821494873Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:24.119178   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.821592273Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:24.119178   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.821611273Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:24.119178   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.821730874Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
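(The two cri-dockerd lines above show the per-container resolv.conf rewrite: the host-network container is pointed at the Hyper-V host's DNS (172.20.96.1), while the cluster-network container gets the in-cluster resolver (10.96.0.10) plus the standard kubernetes search path. A minimal Go sketch of rendering the second form, with the path and values taken from the log line above and the helper name purely hypothetical:)

    // Hypothetical helper: render the cluster-DNS resolv.conf contents that
    // the cri-dockerd log line above reports writing for a pod container.
    package main

    import "fmt"

    func clusterResolvConf() string {
        return "nameserver 10.96.0.10\n" +
            "search default.svc.cluster.local svc.cluster.local cluster.local\n" +
            "options ndots:5\n"
    }

    func main() {
        fmt.Print(clusterResolvConf())
    }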
	I1014 08:47:24.150676   15224 logs.go:123] Gathering logs for dmesg ...
	I1014 08:47:24.150676   15224 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 08:47:24.173686   15224 command_runner.go:130] > [Oct14 15:44] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I1014 08:47:24.173686   15224 command_runner.go:130] > [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I1014 08:47:24.173686   15224 command_runner.go:130] > [  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I1014 08:47:24.173686   15224 command_runner.go:130] > [  +0.121183] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I1014 08:47:24.173686   15224 command_runner.go:130] > [  +0.024192] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I1014 08:47:24.173686   15224 command_runner.go:130] > [  +0.000000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I1014 08:47:24.173686   15224 command_runner.go:130] > [  +0.000000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I1014 08:47:24.173686   15224 command_runner.go:130] > [  +0.058588] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I1014 08:47:24.173686   15224 command_runner.go:130] > [  +0.021951] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I1014 08:47:24.173686   15224 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I1014 08:47:24.173686   15224 command_runner.go:130] > [  +5.764502] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I1014 08:47:24.173686   15224 command_runner.go:130] > [  +0.701221] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I1014 08:47:24.173686   15224 command_runner.go:130] > [  +1.823727] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	I1014 08:47:24.173686   15224 command_runner.go:130] > [  +7.351082] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I1014 08:47:24.173686   15224 command_runner.go:130] > [  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I1014 08:47:24.173686   15224 command_runner.go:130] > [  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	I1014 08:47:24.174542   15224 command_runner.go:130] > [Oct14 15:45] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	I1014 08:47:24.174542   15224 command_runner.go:130] > [  +0.175163] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	I1014 08:47:24.174542   15224 command_runner.go:130] > [ +26.061812] systemd-fstab-generator[1013]: Ignoring "noauto" option for root device
	I1014 08:47:24.174542   15224 command_runner.go:130] > [  +0.098944] kauditd_printk_skb: 71 callbacks suppressed
	I1014 08:47:24.174542   15224 command_runner.go:130] > [  +0.531295] systemd-fstab-generator[1053]: Ignoring "noauto" option for root device
	I1014 08:47:24.174542   15224 command_runner.go:130] > [Oct14 15:46] systemd-fstab-generator[1065]: Ignoring "noauto" option for root device
	I1014 08:47:24.174542   15224 command_runner.go:130] > [  +0.229472] systemd-fstab-generator[1079]: Ignoring "noauto" option for root device
	I1014 08:47:24.174542   15224 command_runner.go:130] > [  +2.943333] systemd-fstab-generator[1310]: Ignoring "noauto" option for root device
	I1014 08:47:24.174542   15224 command_runner.go:130] > [  +0.192845] systemd-fstab-generator[1321]: Ignoring "noauto" option for root device
	I1014 08:47:24.174542   15224 command_runner.go:130] > [  +0.209914] systemd-fstab-generator[1333]: Ignoring "noauto" option for root device
	I1014 08:47:24.174542   15224 command_runner.go:130] > [  +0.290916] systemd-fstab-generator[1348]: Ignoring "noauto" option for root device
	I1014 08:47:24.174542   15224 command_runner.go:130] > [  +0.928050] systemd-fstab-generator[1473]: Ignoring "noauto" option for root device
	I1014 08:47:24.174542   15224 command_runner.go:130] > [  +0.103044] kauditd_printk_skb: 202 callbacks suppressed
	I1014 08:47:24.174542   15224 command_runner.go:130] > [  +3.884891] systemd-fstab-generator[1614]: Ignoring "noauto" option for root device
	I1014 08:47:24.174542   15224 command_runner.go:130] > [  +1.232270] kauditd_printk_skb: 44 callbacks suppressed
	I1014 08:47:24.174542   15224 command_runner.go:130] > [  +5.880292] kauditd_printk_skb: 30 callbacks suppressed
	I1014 08:47:24.174542   15224 command_runner.go:130] > [  +4.216972] systemd-fstab-generator[2460]: Ignoring "noauto" option for root device
	I1014 08:47:24.174542   15224 command_runner.go:130] > [ +15.813728] kauditd_printk_skb: 72 callbacks suppressed
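(The dmesg excerpt above was gathered by the ssh_runner invocation at the top of this block: sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400. A minimal Go sketch of the same gathering step follows, assuming a local bash rather than minikube's real SSH-backed runner; it is illustrative only, not minikube's actual ssh_runner API:)

    // Sketch: run the same filtered-dmesg pipeline that logs.go collects
    // above. minikube's real ssh_runner executes this over SSH inside the
    // guest VM; here we assume a local /bin/bash with sudo available.
    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    func main() {
        pipeline := `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`
        out, err := exec.Command("/bin/bash", "-c", pipeline).CombinedOutput()
        if err != nil {
            log.Printf("dmesg gather failed: %v", err)
        }
        fmt.Print(string(out))
    }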
	I1014 08:47:24.174542   15224 logs.go:123] Gathering logs for coredns [d9831e9f8ce8] ...
	I1014 08:47:24.174542   15224 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9831e9f8ce8"
	I1014 08:47:24.215500   15224 command_runner.go:130] > .:53
	I1014 08:47:24.215601   15224 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 6020d6b23d8ab86e45d1d2aab12a43bd19ffd1800c3e6fd0a66779be525e59cd5618fbe141b55ee3a43e920652bc72e7efafa696e55e63b2f614e5fc80dabe10
	I1014 08:47:24.215601   15224 command_runner.go:130] > CoreDNS-1.11.3
	I1014 08:47:24.215601   15224 command_runner.go:130] > linux/amd64, go1.21.11, a6338e9
	I1014 08:47:24.215601   15224 command_runner.go:130] > [INFO] 127.0.0.1:35483 - 39257 "HINFO IN 8382239991273371198.8905610076788717940. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.074337261s
	I1014 08:47:24.215601   15224 command_runner.go:130] > [INFO] 10.244.1.2:36950 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0003062s
	I1014 08:47:24.215601   15224 command_runner.go:130] > [INFO] 10.244.1.2:49277 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.118118924s
	I1014 08:47:24.215722   15224 command_runner.go:130] > [INFO] 10.244.1.2:33122 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.153089702s
	I1014 08:47:24.215722   15224 command_runner.go:130] > [INFO] 10.244.1.2:44549 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.188160849s
	I1014 08:47:24.215722   15224 command_runner.go:130] > [INFO] 10.244.0.3:43390 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000191s
	I1014 08:47:24.215803   15224 command_runner.go:130] > [INFO] 10.244.0.3:59817 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000279499s
	I1014 08:47:24.215803   15224 command_runner.go:130] > [INFO] 10.244.0.3:34294 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.0002004s
	I1014 08:47:24.215803   15224 command_runner.go:130] > [INFO] 10.244.0.3:56220 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.0002257s
	I1014 08:47:24.215861   15224 command_runner.go:130] > [INFO] 10.244.1.2:44291 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002098s
	I1014 08:47:24.215861   15224 command_runner.go:130] > [INFO] 10.244.1.2:42361 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.17965629s
	I1014 08:47:24.215861   15224 command_runner.go:130] > [INFO] 10.244.1.2:48756 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0002923s
	I1014 08:47:24.215861   15224 command_runner.go:130] > [INFO] 10.244.1.2:53437 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000274799s
	I1014 08:47:24.215861   15224 command_runner.go:130] > [INFO] 10.244.1.2:60026 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.013560692s
	I1014 08:47:24.215861   15224 command_runner.go:130] > [INFO] 10.244.1.2:39241 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001752s
	I1014 08:47:24.215861   15224 command_runner.go:130] > [INFO] 10.244.1.2:36696 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0003084s
	I1014 08:47:24.215861   15224 command_runner.go:130] > [INFO] 10.244.1.2:51603 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001109s
	I1014 08:47:24.215861   15224 command_runner.go:130] > [INFO] 10.244.0.3:37516 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002057s
	I1014 08:47:24.215861   15224 command_runner.go:130] > [INFO] 10.244.0.3:41090 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0001329s
	I1014 08:47:24.215861   15224 command_runner.go:130] > [INFO] 10.244.0.3:45330 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001397s
	I1014 08:47:24.215861   15224 command_runner.go:130] > [INFO] 10.244.0.3:46520 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000154699s
	I1014 08:47:24.215861   15224 command_runner.go:130] > [INFO] 10.244.0.3:40342 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0001347s
	I1014 08:47:24.215861   15224 command_runner.go:130] > [INFO] 10.244.0.3:60385 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001518s
	I1014 08:47:24.215861   15224 command_runner.go:130] > [INFO] 10.244.0.3:56338 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001527s
	I1014 08:47:24.215861   15224 command_runner.go:130] > [INFO] 10.244.0.3:50480 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0002505s
	I1014 08:47:24.215861   15224 command_runner.go:130] > [INFO] 10.244.1.2:60726 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002297s
	I1014 08:47:24.215861   15224 command_runner.go:130] > [INFO] 10.244.1.2:44347 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0002249s
	I1014 08:47:24.215861   15224 command_runner.go:130] > [INFO] 10.244.1.2:49092 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001075s
	I1014 08:47:24.215861   15224 command_runner.go:130] > [INFO] 10.244.1.2:54937 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000068s
	I1014 08:47:24.215861   15224 command_runner.go:130] > [INFO] 10.244.0.3:59749 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002552s
	I1014 08:47:24.215861   15224 command_runner.go:130] > [INFO] 10.244.0.3:56008 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0002499s
	I1014 08:47:24.215861   15224 command_runner.go:130] > [INFO] 10.244.0.3:60338 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001065s
	I1014 08:47:24.215861   15224 command_runner.go:130] > [INFO] 10.244.0.3:49712 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001634s
	I1014 08:47:24.215861   15224 command_runner.go:130] > [INFO] 10.244.1.2:56508 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001577s
	I1014 08:47:24.215861   15224 command_runner.go:130] > [INFO] 10.244.1.2:45387 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001545s
	I1014 08:47:24.215861   15224 command_runner.go:130] > [INFO] 10.244.1.2:39608 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0001419s
	I1014 08:47:24.215861   15224 command_runner.go:130] > [INFO] 10.244.1.2:53878 - 5 "PTR IN 1.96.20.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.0001923s
	I1014 08:47:24.215861   15224 command_runner.go:130] > [INFO] 10.244.0.3:58589 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002828s
	I1014 08:47:24.215861   15224 command_runner.go:130] > [INFO] 10.244.0.3:58608 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001609s
	I1014 08:47:24.215861   15224 command_runner.go:130] > [INFO] 10.244.0.3:52599 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000633s
	I1014 08:47:24.215861   15224 command_runner.go:130] > [INFO] 10.244.0.3:58233 - 5 "PTR IN 1.96.20.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.0001058s
	I1014 08:47:24.216397   15224 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I1014 08:47:24.216397   15224 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
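(The coredns query lines above follow the log plugin's common layout: {remote}:{port} - {id} "{type} IN {name} {proto} {size} {do} {bufsize}" {rcode} {rflags} {rsize} {duration}. A hedged Go sketch for pulling the interesting fields out of such a line; the regexp is derived from the sample lines in this report, not from coredns source:)

    // Hedged sketch: extract client, query type/name, rcode, flags, and
    // latency from a coredns "log" plugin line like the samples above.
    package main

    import (
        "fmt"
        "regexp"
    )

    var coreDNSLine = regexp.MustCompile(
        `^\[INFO\] ([\d.]+:\d+) - \d+ "(\w+) IN (\S+) \w+ \d+ \w+ \d+" (\w+) ([\w,]+) \d+ ([\d.]+s)$`)

    func main() {
        line := `[INFO] 10.244.1.2:36950 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0003062s`
        if m := coreDNSLine.FindStringSubmatch(line); m != nil {
            fmt.Printf("client=%s type=%s name=%s rcode=%s flags=%s took=%s\n",
                m[1], m[2], m[3], m[4], m[5], m[6])
        }
    }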
	I1014 08:47:24.219691   15224 logs.go:123] Gathering logs for kindnet [fcdf89a3ac8c] ...
	I1014 08:47:24.219753   15224 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcdf89a3ac8c"
	I1014 08:47:24.269436   15224 command_runner.go:130] ! I1014 15:32:44.862261       1 main.go:300] handling current node
	I1014 08:47:24.269824   15224 command_runner.go:130] ! I1014 15:32:44.862301       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.269824   15224 command_runner.go:130] ! I1014 15:32:44.862313       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.269947   15224 command_runner.go:130] ! I1014 15:32:44.862605       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.269947   15224 command_runner.go:130] ! I1014 15:32:44.862636       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.269947   15224 command_runner.go:130] ! I1014 15:32:54.862103       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.269947   15224 command_runner.go:130] ! I1014 15:32:54.862232       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.269947   15224 command_runner.go:130] ! I1014 15:32:54.862979       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.269947   15224 command_runner.go:130] ! I1014 15:32:54.863013       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.269947   15224 command_runner.go:130] ! I1014 15:32:54.863219       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.270251   15224 command_runner.go:130] ! I1014 15:32:54.863233       1 main.go:300] handling current node
	I1014 08:47:24.270298   15224 command_runner.go:130] ! I1014 15:33:04.864377       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.270298   15224 command_runner.go:130] ! I1014 15:33:04.864510       1 main.go:300] handling current node
	I1014 08:47:24.270298   15224 command_runner.go:130] ! I1014 15:33:04.864534       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.270985   15224 command_runner.go:130] ! I1014 15:33:04.864544       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.270985   15224 command_runner.go:130] ! I1014 15:33:04.864795       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.270985   15224 command_runner.go:130] ! I1014 15:33:04.864807       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.271372   15224 command_runner.go:130] ! I1014 15:33:14.870098       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.272323   15224 command_runner.go:130] ! I1014 15:33:14.870279       1 main.go:300] handling current node
	I1014 08:47:24.272323   15224 command_runner.go:130] ! I1014 15:33:14.870319       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.272323   15224 command_runner.go:130] ! I1014 15:33:14.870394       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.272543   15224 command_runner.go:130] ! I1014 15:33:14.872221       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.273384   15224 command_runner.go:130] ! I1014 15:33:14.872265       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.273749   15224 command_runner.go:130] ! I1014 15:33:24.862168       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.274059   15224 command_runner.go:130] ! I1014 15:33:24.862234       1 main.go:300] handling current node
	I1014 08:47:24.275045   15224 command_runner.go:130] ! I1014 15:33:24.862290       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.275142   15224 command_runner.go:130] ! I1014 15:33:24.862303       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.275167   15224 command_runner.go:130] ! I1014 15:33:24.862799       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.275167   15224 command_runner.go:130] ! I1014 15:33:24.862950       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.275167   15224 command_runner.go:130] ! I1014 15:33:34.870712       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.275167   15224 command_runner.go:130] ! I1014 15:33:34.870952       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.275167   15224 command_runner.go:130] ! I1014 15:33:34.871749       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.275167   15224 command_runner.go:130] ! I1014 15:33:34.871848       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.275167   15224 command_runner.go:130] ! I1014 15:33:34.872312       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.275167   15224 command_runner.go:130] ! I1014 15:33:34.872409       1 main.go:300] handling current node
	I1014 08:47:24.275167   15224 command_runner.go:130] ! I1014 15:33:44.868271       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.275756   15224 command_runner.go:130] ! I1014 15:33:44.868442       1 main.go:300] handling current node
	I1014 08:47:24.275831   15224 command_runner.go:130] ! I1014 15:33:44.868482       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.275874   15224 command_runner.go:130] ! I1014 15:33:44.868509       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.275911   15224 command_runner.go:130] ! I1014 15:33:44.869165       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.275911   15224 command_runner.go:130] ! I1014 15:33:44.869252       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.275911   15224 command_runner.go:130] ! I1014 15:33:54.862162       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.275911   15224 command_runner.go:130] ! I1014 15:33:54.862365       1 main.go:300] handling current node
	I1014 08:47:24.275911   15224 command_runner.go:130] ! I1014 15:33:54.862404       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.275911   15224 command_runner.go:130] ! I1014 15:33:54.862429       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.275911   15224 command_runner.go:130] ! I1014 15:33:54.862766       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.275911   15224 command_runner.go:130] ! I1014 15:33:54.862800       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.275911   15224 command_runner.go:130] ! I1014 15:34:04.870860       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.275911   15224 command_runner.go:130] ! I1014 15:34:04.870993       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.275911   15224 command_runner.go:130] ! I1014 15:34:04.871751       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.275911   15224 command_runner.go:130] ! I1014 15:34:04.871830       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.275911   15224 command_runner.go:130] ! I1014 15:34:04.872365       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.275911   15224 command_runner.go:130] ! I1014 15:34:04.872444       1 main.go:300] handling current node
	I1014 08:47:24.275911   15224 command_runner.go:130] ! I1014 15:34:14.868274       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.275911   15224 command_runner.go:130] ! I1014 15:34:14.868410       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.275911   15224 command_runner.go:130] ! I1014 15:34:14.869151       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.275911   15224 command_runner.go:130] ! I1014 15:34:14.869244       1 main.go:300] handling current node
	I1014 08:47:24.275911   15224 command_runner.go:130] ! I1014 15:34:14.869263       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.275911   15224 command_runner.go:130] ! I1014 15:34:14.869271       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.275911   15224 command_runner.go:130] ! I1014 15:34:24.869326       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.275911   15224 command_runner.go:130] ! I1014 15:34:24.869383       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.275911   15224 command_runner.go:130] ! I1014 15:34:24.870365       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.275911   15224 command_runner.go:130] ! I1014 15:34:24.870464       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.275911   15224 command_runner.go:130] ! I1014 15:34:24.871197       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.275911   15224 command_runner.go:130] ! I1014 15:34:24.871235       1 main.go:300] handling current node
	I1014 08:47:24.275911   15224 command_runner.go:130] ! I1014 15:34:34.862280       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.275911   15224 command_runner.go:130] ! I1014 15:34:34.862387       1 main.go:300] handling current node
	I1014 08:47:24.275911   15224 command_runner.go:130] ! I1014 15:34:34.862420       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.275911   15224 command_runner.go:130] ! I1014 15:34:34.862440       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.275911   15224 command_runner.go:130] ! I1014 15:34:34.862809       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.275911   15224 command_runner.go:130] ! I1014 15:34:34.862844       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.276439   15224 command_runner.go:130] ! I1014 15:34:44.870611       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.276439   15224 command_runner.go:130] ! I1014 15:34:44.870703       1 main.go:300] handling current node
	I1014 08:47:24.276439   15224 command_runner.go:130] ! I1014 15:34:44.870732       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.276517   15224 command_runner.go:130] ! I1014 15:34:44.870826       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.276517   15224 command_runner.go:130] ! I1014 15:34:44.871348       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.276517   15224 command_runner.go:130] ! I1014 15:34:44.871437       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.276573   15224 command_runner.go:130] ! I1014 15:34:54.862260       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.276573   15224 command_runner.go:130] ! I1014 15:34:54.862358       1 main.go:300] handling current node
	I1014 08:47:24.276573   15224 command_runner.go:130] ! I1014 15:34:54.862379       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:34:54.862388       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:34:54.862782       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:34:54.862862       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:35:04.871418       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:35:04.871489       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:35:04.872322       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:35:04.872416       1 main.go:300] handling current node
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:35:04.872437       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:35:04.872445       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:35:14.870301       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:35:14.870413       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:35:14.870922       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:35:14.870941       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:35:14.871055       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:35:14.871086       1 main.go:300] handling current node
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:35:24.870776       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:35:24.870814       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:35:24.871449       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:35:24.871682       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:35:24.872057       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:35:24.872149       1 main.go:300] handling current node
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:35:34.871155       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:35:34.871422       1 main.go:300] handling current node
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:35:34.876612       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:35:34.876630       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:35:34.876808       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:35:34.876817       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:35:44.872405       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:35:44.872450       1 main.go:300] handling current node
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:35:44.872467       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:35:44.872473       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:35:44.873120       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:35:44.873155       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:35:54.862113       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:35:54.862220       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:35:54.862608       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:35:54.862725       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:35:54.862993       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:35:54.863089       1 main.go:300] handling current node
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:36:04.870594       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:36:04.870634       1 main.go:300] handling current node
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:36:04.870705       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:36:04.870719       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:36:04.871246       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:36:04.871261       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:36:14.862194       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:36:14.862337       1 main.go:300] handling current node
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:36:14.862361       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:36:14.862370       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:36:14.863024       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:36:14.863053       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:36:24.870839       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:36:24.871114       1 main.go:300] handling current node
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:36:24.871303       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.276632   15224 command_runner.go:130] ! I1014 15:36:24.871618       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:36:24.872052       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:36:24.872164       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:36:34.870320       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:36:34.870375       1 main.go:300] handling current node
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:36:34.870396       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:36:34.870404       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:36:34.870774       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:36:34.870810       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:36:44.864305       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:36:44.864530       1 main.go:300] handling current node
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:36:44.864616       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:36:44.864683       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:36:44.865206       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:36:44.865241       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:36:54.862701       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:36:54.862834       1 main.go:300] handling current node
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:36:54.862940       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:36:54.863054       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:36:54.864321       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:36:54.864397       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:37:04.863761       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:37:04.863854       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:37:04.864505       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:37:04.864638       1 main.go:300] handling current node
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:37:04.864656       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:37:04.864664       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:37:14.866293       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:37:14.866653       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:37:14.867034       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:37:14.867067       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:37:14.867179       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:37:14.867247       1 main.go:300] handling current node
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:37:24.867969       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:37:24.868019       1 main.go:300] handling current node
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:37:24.868036       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:37:24.868043       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:37:24.868511       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:37:24.868549       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:37:34.863786       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:37:34.864224       1 main.go:300] handling current node
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:37:34.864384       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:37:34.864448       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:37:34.864771       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:37:34.864865       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:37:44.871310       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:37:44.871352       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:37:44.871803       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:37:44.871837       1 main.go:300] handling current node
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:37:44.871852       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:37:44.871859       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:37:54.862573       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:37:54.862694       1 main.go:300] handling current node
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:37:54.862714       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:37:54.862723       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:37:54.863288       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:37:54.863364       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:38:04.872124       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:38:04.872285       1 main.go:300] handling current node
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:38:04.872330       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:38:04.872343       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:38:04.873184       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:38:04.873352       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:38:14.863654       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:38:14.863788       1 main.go:300] handling current node
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:38:14.863812       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:38:14.863822       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:38:14.864488       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:38:14.864585       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:38:24.868537       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:38:24.868643       1 main.go:300] handling current node
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:38:24.868664       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.277500   15224 command_runner.go:130] ! I1014 15:38:24.868672       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:38:24.869258       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:38:24.869347       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:38:34.864233       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:38:34.864469       1 main.go:300] handling current node
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:38:34.864497       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:38:34.864509       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:38:34.865023       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:38:34.865061       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:38:44.870754       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:38:44.870859       1 main.go:300] handling current node
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:38:44.870919       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:38:44.870931       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:38:44.871124       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:38:44.871155       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:38:54.862849       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:38:54.863008       1 main.go:300] handling current node
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:38:54.863029       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:38:54.863040       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:38:54.863313       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:38:54.863343       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:39:04.861865       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:39:04.862353       1 main.go:300] handling current node
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:39:04.862819       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:39:04.863053       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:39:04.863648       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:39:04.865127       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:39:14.870473       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:39:14.870526       1 main.go:300] handling current node
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:39:14.870544       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:39:14.870551       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:39:14.871123       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:39:14.871161       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:39:24.862264       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:39:24.862304       1 main.go:300] handling current node
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:39:24.862323       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:39:24.862331       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:39:24.863326       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:39:24.863417       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:39:34.862868       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:39:34.863041       1 main.go:300] handling current node
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:39:34.863063       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:39:34.863072       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:39:34.863370       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:39:34.863460       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:39:44.872051       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:39:44.872175       1 main.go:300] handling current node
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:39:44.872198       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:39:44.872392       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:39:44.873038       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:39:44.873160       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:39:54.862953       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:39:54.862990       1 main.go:300] handling current node
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:39:54.863013       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:39:54.863022       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:39:54.863377       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:39:54.863412       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:40:04.864160       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:40:04.864198       1 main.go:300] handling current node
	I1014 08:47:24.278488   15224 command_runner.go:130] ! I1014 15:40:04.864216       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:40:04.864222       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:40:04.864390       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:40:04.864399       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:40:14.862864       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:40:14.863081       1 main.go:300] handling current node
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:40:14.863442       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:40:14.863496       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:40:14.864019       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:40:14.864052       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:40:24.867383       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:40:24.867717       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:40:24.868487       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:40:24.868619       1 main.go:300] handling current node
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:40:24.868640       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:40:24.868650       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:40:34.866060       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:40:34.866194       1 main.go:300] handling current node
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:40:34.866224       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:40:34.866240       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:40:34.867632       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:40:34.867868       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:40:44.875002       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:40:44.875336       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:40:44.875792       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:40:44.875991       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:40:44.876302       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:40:44.876531       1 main.go:300] handling current node
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:40:54.862504       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:40:54.862640       1 main.go:300] handling current node
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:40:54.862766       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:40:54.862834       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:40:54.863108       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:40:54.863140       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:41:04.863181       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:41:04.863304       1 main.go:300] handling current node
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:41:04.863326       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:41:04.863335       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:41:04.863824       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:41:04.863963       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:41:14.868270       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:41:14.868443       1 main.go:300] handling current node
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:41:14.868487       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:41:14.868541       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:41:14.868808       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:41:14.868843       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:41:24.862261       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:41:24.862508       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:41:24.863242       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:41:24.863792       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:41:24.864172       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:41:24.864327       1 main.go:300] handling current node
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:41:34.862294       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:41:34.862355       1 main.go:300] handling current node
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:41:34.862377       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:41:34.862385       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:41:44.862674       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:41:44.862799       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:41:44.863254       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.20.102.29 Flags: [] Table: 0 Realm: 0} 
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:41:44.863509       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:41:44.863768       1 main.go:300] handling current node
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:41:44.863945       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:41:44.864052       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:41:54.862083       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:41:54.862208       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:41:54.862577       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:41:54.862723       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:41:54.863005       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:41:54.863097       1 main.go:300] handling current node
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:42:04.870504       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:42:04.871039       1 main.go:300] handling current node
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:42:04.871167       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.279489   15224 command_runner.go:130] ! I1014 15:42:04.871277       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:42:04.871721       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:42:04.871740       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:42:14.862252       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:42:14.862455       1 main.go:300] handling current node
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:42:14.862499       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:42:14.862521       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:42:14.863189       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:42:14.863224       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:42:24.862819       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:42:24.863072       1 main.go:300] handling current node
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:42:24.863093       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:42:24.863103       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:42:24.864093       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:42:24.864136       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:42:34.863373       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:42:34.863425       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:42:34.863670       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:42:34.863742       1 main.go:300] handling current node
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:42:34.863763       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:42:34.863771       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:42:44.861842       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:42:44.862176       1 main.go:300] handling current node
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:42:44.862271       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:42:44.862357       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:42:44.862743       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:42:44.863009       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:42:54.863140       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:42:54.863181       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:42:54.863865       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:42:54.864051       1 main.go:300] handling current node
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:42:54.864417       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:42:54.864427       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:43:04.862539       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:43:04.862625       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:43:04.863289       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:43:04.863395       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:43:04.863612       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:43:04.863764       1 main.go:300] handling current node
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:43:14.871242       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:43:14.871727       1 main.go:300] handling current node
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:43:14.871818       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:43:14.871846       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:43:14.872085       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:43:14.872201       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:43:24.871405       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:43:24.871540       1 main.go:300] handling current node
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:43:24.871566       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:43:24.871575       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:43:24.871835       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:43:24.872193       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:43:34.863042       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:43:34.863237       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:43:34.863962       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:43:34.864059       1 main.go:300] handling current node
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:43:34.864077       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:43:34.864085       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:43:44.871016       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:43:44.871057       1 main.go:300] handling current node
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:43:44.871074       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:43:44.871081       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:43:44.871299       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:24.280487   15224 command_runner.go:130] ! I1014 15:43:44.871310       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:24.298501   15224 logs.go:123] Gathering logs for container status ...
	I1014 08:47:24.298501   15224 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 08:47:24.363797   15224 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I1014 08:47:24.363923   15224 command_runner.go:130] > 1adddc667bd90       8c811b4aec35f                                                                                         4 seconds ago        Running             busybox                   1                   9092d17516eb3       busybox-7dff88458-vlp7j
	I1014 08:47:24.363978   15224 command_runner.go:130] > 5d223e2e64fcd       c69fa2e9cbf5f                                                                                         4 seconds ago        Running             coredns                   1                   429b989a1a986       coredns-7c65d6cfc9-fs9ct
	I1014 08:47:24.363978   15224 command_runner.go:130] > 9d526b02ee41c       6e38f40d628db                                                                                         22 seconds ago       Running             storage-provisioner       2                   cdcdd532ba136       storage-provisioner
	I1014 08:47:24.363978   15224 command_runner.go:130] > bba035362eb97       3a5bc24055c9e                                                                                         About a minute ago   Running             kindnet-cni               1                   7bcadf1f0885f       kindnet-wqrx6
	I1014 08:47:24.364067   15224 command_runner.go:130] > c76c258568107       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   cdcdd532ba136       storage-provisioner
	I1014 08:47:24.364097   15224 command_runner.go:130] > e83db276dec37       60c005f310ff3                                                                                         About a minute ago   Running             kube-proxy                1                   6f8bdf552734e       kube-proxy-r74dx
	I1014 08:47:24.364097   15224 command_runner.go:130] > 48c8492e231e1       2e96e5913fc06                                                                                         About a minute ago   Running             etcd                      0                   0697a11790e80       etcd-multinode-671000
	I1014 08:47:24.364189   15224 command_runner.go:130] > 8af48c446f7e1       175ffd71cce3d                                                                                         About a minute ago   Running             kube-controller-manager   1                   7bd4c36606eef       kube-controller-manager-multinode-671000
	I1014 08:47:24.364239   15224 command_runner.go:130] > a834664fc8b80       6bab7719df100                                                                                         About a minute ago   Running             kube-apiserver            0                   6155e8be2d5d7       kube-apiserver-multinode-671000
	I1014 08:47:24.364268   15224 command_runner.go:130] > d428685276e1e       9aa1fad941575                                                                                         About a minute ago   Running             kube-scheduler            1                   1d3033f871fb1       kube-scheduler-multinode-671000
	I1014 08:47:24.364268   15224 command_runner.go:130] > cbf0b40e378b9       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   20 minutes ago       Exited              busybox                   0                   06e529266db4b       busybox-7dff88458-vlp7j
	I1014 08:47:24.364331   15224 command_runner.go:130] > d9831e9f8ce8c       c69fa2e9cbf5f                                                                                         24 minutes ago       Exited              coredns                   0                   2f8cc9a218fef       coredns-7c65d6cfc9-fs9ct
	I1014 08:47:24.364390   15224 command_runner.go:130] > fcdf89a3ac8ce       kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387              24 minutes ago       Exited              kindnet-cni               0                   5e48ddcfdf90a       kindnet-wqrx6
	I1014 08:47:24.364432   15224 command_runner.go:130] > ea19428d70363       60c005f310ff3                                                                                         24 minutes ago       Exited              kube-proxy                0                   7144d8ce208cf       kube-proxy-r74dx
	I1014 08:47:24.364477   15224 command_runner.go:130] > 661e75bbf6b46       9aa1fad941575                                                                                         24 minutes ago       Exited              kube-scheduler            0                   2dc78387553ff       kube-scheduler-multinode-671000
	I1014 08:47:24.364545   15224 command_runner.go:130] > 712aad669c9f6       175ffd71cce3d                                                                                         24 minutes ago       Exited              kube-controller-manager   0                   bfdde08319e32       kube-controller-manager-multinode-671000
	I1014 08:47:24.366660   15224 logs.go:123] Gathering logs for coredns [5d223e2e64fc] ...
	I1014 08:47:24.366660   15224 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d223e2e64fc"
	I1014 08:47:24.400410   15224 command_runner.go:130] > .:53
	I1014 08:47:24.400712   15224 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 6020d6b23d8ab86e45d1d2aab12a43bd19ffd1800c3e6fd0a66779be525e59cd5618fbe141b55ee3a43e920652bc72e7efafa696e55e63b2f614e5fc80dabe10
	I1014 08:47:24.400712   15224 command_runner.go:130] > CoreDNS-1.11.3
	I1014 08:47:24.400712   15224 command_runner.go:130] > linux/amd64, go1.21.11, a6338e9
	I1014 08:47:24.400712   15224 command_runner.go:130] > [INFO] 127.0.0.1:42996 - 9104 "HINFO IN 5434967794797104596.5472118418078127170. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.148386647s
	I1014 08:47:24.400945   15224 logs.go:123] Gathering logs for kindnet [bba035362eb9] ...
	I1014 08:47:24.401030   15224 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bba035362eb9"
	I1014 08:47:24.436616   15224 command_runner.go:130] ! I1014 15:46:18.000845       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1014 08:47:24.437607   15224 command_runner.go:130] ! I1014 15:46:18.015386       1 main.go:139] hostIP = 172.20.106.123
	I1014 08:47:24.437607   15224 command_runner.go:130] ! podIP = 172.20.106.123
	I1014 08:47:24.437607   15224 command_runner.go:130] ! I1014 15:46:18.015613       1 main.go:148] setting mtu 1500 for CNI 
	I1014 08:47:24.437695   15224 command_runner.go:130] ! I1014 15:46:18.015630       1 main.go:178] kindnetd IP family: "ipv4"
	I1014 08:47:24.437695   15224 command_runner.go:130] ! I1014 15:46:18.015641       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I1014 08:47:24.437695   15224 command_runner.go:130] ! I1014 15:46:18.919987       1 main.go:238] Error creating network policy controller: could not run nftables command: /dev/stdin:1:1-37: Error: Could not process rule: Operation not supported
	I1014 08:47:24.437767   15224 command_runner.go:130] ! add table inet kube-network-policies
	I1014 08:47:24.437767   15224 command_runner.go:130] ! ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
	I1014 08:47:24.437767   15224 command_runner.go:130] ! , skipping network policies
	I1014 08:47:24.437767   15224 command_runner.go:130] ! W1014 15:46:48.934772       1 reflector.go:561] pkg/mod/k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I1014 08:47:24.437846   15224 command_runner.go:130] ! E1014 15:46:48.935157       1 reflector.go:158] "Unhandled Error" err="pkg/mod/k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError"
	I1014 08:47:24.437846   15224 command_runner.go:130] ! I1014 15:46:58.925780       1 main.go:296] Handling node with IPs: map[172.20.106.123:{}]
	I1014 08:47:24.437846   15224 command_runner.go:130] ! I1014 15:46:58.926393       1 main.go:300] handling current node
	I1014 08:47:24.437916   15224 command_runner.go:130] ! I1014 15:46:58.927562       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.437916   15224 command_runner.go:130] ! I1014 15:46:58.927665       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.437916   15224 command_runner.go:130] ! I1014 15:46:58.928645       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.20.109.137 Flags: [] Table: 0 Realm: 0} 
	I1014 08:47:24.437977   15224 command_runner.go:130] ! I1014 15:46:58.929412       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:24.437977   15224 command_runner.go:130] ! I1014 15:46:58.929466       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:24.438026   15224 command_runner.go:130] ! I1014 15:46:58.929555       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.20.102.29 Flags: [] Table: 0 Realm: 0} 
	I1014 08:47:24.438057   15224 command_runner.go:130] ! I1014 15:47:08.930440       1 main.go:296] Handling node with IPs: map[172.20.106.123:{}]
	I1014 08:47:24.438057   15224 command_runner.go:130] ! I1014 15:47:08.930586       1 main.go:300] handling current node
	I1014 08:47:24.438057   15224 command_runner.go:130] ! I1014 15:47:08.930648       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.438057   15224 command_runner.go:130] ! I1014 15:47:08.930739       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.438057   15224 command_runner.go:130] ! I1014 15:47:08.931080       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:24.438057   15224 command_runner.go:130] ! I1014 15:47:08.931268       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:24.438057   15224 command_runner.go:130] ! I1014 15:47:18.921538       1 main.go:296] Handling node with IPs: map[172.20.106.123:{}]
	I1014 08:47:24.438057   15224 command_runner.go:130] ! I1014 15:47:18.921639       1 main.go:300] handling current node
	I1014 08:47:24.438057   15224 command_runner.go:130] ! I1014 15:47:18.921689       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:24.438057   15224 command_runner.go:130] ! I1014 15:47:18.921698       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:24.438057   15224 command_runner.go:130] ! I1014 15:47:18.922117       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:24.438057   15224 command_runner.go:130] ! I1014 15:47:18.922190       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:24.440700   15224 logs.go:123] Gathering logs for etcd [48c8492e231e] ...
	I1014 08:47:24.440700   15224 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48c8492e231e"
	I1014 08:47:24.469663   15224 command_runner.go:130] ! {"level":"warn","ts":"2024-10-14T15:46:11.845953Z","caller":"embed/config.go:687","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I1014 08:47:24.470625   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:11.848739Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.20.106.123:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.20.106.123:2380","--initial-cluster=multinode-671000=https://172.20.106.123:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.20.106.123:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.20.106.123:2380","--name=multinode-671000","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I1014 08:47:24.470651   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:11.848857Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I1014 08:47:24.470651   15224 command_runner.go:130] ! {"level":"warn","ts":"2024-10-14T15:46:11.848886Z","caller":"embed/config.go:687","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I1014 08:47:24.470651   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:11.848900Z","caller":"embed/etcd.go:128","msg":"configuring peer listeners","listen-peer-urls":["https://172.20.106.123:2380"]}
	I1014 08:47:24.470723   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:11.848962Z","caller":"embed/etcd.go:496","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I1014 08:47:24.470723   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:11.854418Z","caller":"embed/etcd.go:136","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.20.106.123:2379"]}
	I1014 08:47:24.470891   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:11.857036Z","caller":"embed/etcd.go:310","msg":"starting an etcd server","etcd-version":"3.5.15","git-sha":"9a5533382","go-version":"go1.21.12","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-671000","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.20.106.123:2380"],"listen-peer-urls":["https://172.20.106.123:2380"],"advertise-client-urls":["https://172.20.106.123:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.20.106.123:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I1014 08:47:24.470956   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:11.899392Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"40.66952ms"}
	I1014 08:47:24.470983   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:11.949173Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	I1014 08:47:24.471030   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:11.984197Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"2dcbff584edb18cc","local-member-id":"782c48cbdf98397b","commit-index":2088}
	I1014 08:47:24.471030   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:11.985089Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"782c48cbdf98397b switched to configuration voters=()"}
	I1014 08:47:24.471030   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:11.987739Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"782c48cbdf98397b became follower at term 2"}
	I1014 08:47:24.471030   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:11.987772Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 782c48cbdf98397b [peers: [], term: 2, commit: 2088, applied: 0, lastindex: 2088, lastterm: 2]"}
	I1014 08:47:24.471160   15224 command_runner.go:130] ! {"level":"warn","ts":"2024-10-14T15:46:12.003567Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	I1014 08:47:24.471160   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.010981Z","caller":"mvcc/kvstore.go:341","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1396}
	I1014 08:47:24.471321   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.025362Z","caller":"mvcc/kvstore.go:418","msg":"kvstore restored","current-rev":1813}
	I1014 08:47:24.471321   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.035174Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I1014 08:47:24.471321   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.045608Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"782c48cbdf98397b","timeout":"7s"}
	I1014 08:47:24.471406   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.046705Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"782c48cbdf98397b"}
	I1014 08:47:24.471406   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.046807Z","caller":"etcdserver/server.go:867","msg":"starting etcd server","local-member-id":"782c48cbdf98397b","local-server-version":"3.5.15","cluster-version":"to_be_decided"}
	I1014 08:47:24.471406   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.047198Z","caller":"etcdserver/server.go:767","msg":"starting initial election tick advance","election-ticks":10}
	I1014 08:47:24.471473   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.047977Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I1014 08:47:24.471532   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.048044Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I1014 08:47:24.471532   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.048058Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I1014 08:47:24.471532   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.048736Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"782c48cbdf98397b switched to configuration voters=(8659376223993477499)"}
	I1014 08:47:24.471532   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.049262Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"2dcbff584edb18cc","local-member-id":"782c48cbdf98397b","added-peer-id":"782c48cbdf98397b","added-peer-peer-urls":["https://172.20.100.167:2380"]}
	I1014 08:47:24.471532   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.049694Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"2dcbff584edb18cc","local-member-id":"782c48cbdf98397b","cluster-version":"3.5"}
	I1014 08:47:24.471532   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.049815Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I1014 08:47:24.471532   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.056204Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	I1014 08:47:24.471532   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.062166Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I1014 08:47:24.471532   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.062574Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"782c48cbdf98397b","initial-advertise-peer-urls":["https://172.20.106.123:2380"],"listen-peer-urls":["https://172.20.106.123:2380"],"advertise-client-urls":["https://172.20.106.123:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.20.106.123:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I1014 08:47:24.471532   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.062654Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I1014 08:47:24.471532   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.062749Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"172.20.106.123:2380"}
	I1014 08:47:24.471532   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.062764Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"172.20.106.123:2380"}
	I1014 08:47:24.471532   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.489231Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"782c48cbdf98397b is starting a new election at term 2"}
	I1014 08:47:24.472162   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.489328Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"782c48cbdf98397b became pre-candidate at term 2"}
	I1014 08:47:24.472162   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.489358Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"782c48cbdf98397b received MsgPreVoteResp from 782c48cbdf98397b at term 2"}
	I1014 08:47:24.472162   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.489374Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"782c48cbdf98397b became candidate at term 3"}
	I1014 08:47:24.472261   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.489385Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"782c48cbdf98397b received MsgVoteResp from 782c48cbdf98397b at term 3"}
	I1014 08:47:24.472261   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.489395Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"782c48cbdf98397b became leader at term 3"}
	I1014 08:47:24.472261   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.489487Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 782c48cbdf98397b elected leader 782c48cbdf98397b at term 3"}
	I1014 08:47:24.472346   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.496949Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I1014 08:47:24.472372   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.496902Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"782c48cbdf98397b","local-member-attributes":"{Name:multinode-671000 ClientURLs:[https://172.20.106.123:2379]}","request-path":"/0/members/782c48cbdf98397b/attributes","cluster-id":"2dcbff584edb18cc","publish-timeout":"7s"}
	I1014 08:47:24.472402   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.497822Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I1014 08:47:24.472402   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.499586Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I1014 08:47:24.472402   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.499631Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I1014 08:47:24.472402   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.500815Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	I1014 08:47:24.472402   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.502392Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.20.106.123:2379"}
	I1014 08:47:24.472402   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.503879Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	I1014 08:47:24.472402   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.505686Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I1014 08:47:24.479270   15224 logs.go:123] Gathering logs for kube-proxy [e83db276dec3] ...
	I1014 08:47:24.479270   15224 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e83db276dec3"
	I1014 08:47:24.511770   15224 command_runner.go:130] ! I1014 15:46:17.821967       1 server_linux.go:66] "Using iptables proxy"
	I1014 08:47:24.512352   15224 command_runner.go:130] ! E1014 15:46:17.985243       1 proxier.go:734] "Error cleaning up nftables rules" err=<
	I1014 08:47:24.512352   15224 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-24: Error: Could not process rule: Operation not supported
	I1014 08:47:24.512429   15224 command_runner.go:130] ! 	add table ip kube-proxy
	I1014 08:47:24.512429   15224 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^
	I1014 08:47:24.512429   15224 command_runner.go:130] !  >
	I1014 08:47:24.513427   15224 command_runner.go:130] ! E1014 15:46:18.020523       1 proxier.go:734] "Error cleaning up nftables rules" err=<
	I1014 08:47:24.513943   15224 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
	I1014 08:47:24.513943   15224 command_runner.go:130] ! 	add table ip6 kube-proxy
	I1014 08:47:24.513943   15224 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^^
	I1014 08:47:24.513943   15224 command_runner.go:130] !  >
	I1014 08:47:24.513943   15224 command_runner.go:130] ! I1014 15:46:18.173230       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["172.20.106.123"]
	I1014 08:47:24.513943   15224 command_runner.go:130] ! E1014 15:46:18.173392       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1014 08:47:24.513943   15224 command_runner.go:130] ! I1014 15:46:18.286207       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1014 08:47:24.514109   15224 command_runner.go:130] ! I1014 15:46:18.287289       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1014 08:47:24.514109   15224 command_runner.go:130] ! I1014 15:46:18.287905       1 server_linux.go:169] "Using iptables Proxier"
	I1014 08:47:24.514109   15224 command_runner.go:130] ! I1014 15:46:18.293792       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1014 08:47:24.514109   15224 command_runner.go:130] ! I1014 15:46:18.300740       1 server.go:483] "Version info" version="v1.31.1"
	I1014 08:47:24.514196   15224 command_runner.go:130] ! I1014 15:46:18.300778       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 08:47:24.514196   15224 command_runner.go:130] ! I1014 15:46:18.305824       1 config.go:199] "Starting service config controller"
	I1014 08:47:24.514196   15224 command_runner.go:130] ! I1014 15:46:18.308209       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1014 08:47:24.514257   15224 command_runner.go:130] ! I1014 15:46:18.308868       1 config.go:328] "Starting node config controller"
	I1014 08:47:24.514257   15224 command_runner.go:130] ! I1014 15:46:18.314183       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1014 08:47:24.514257   15224 command_runner.go:130] ! I1014 15:46:18.309398       1 config.go:105] "Starting endpoint slice config controller"
	I1014 08:47:24.514257   15224 command_runner.go:130] ! I1014 15:46:18.317842       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1014 08:47:24.514320   15224 command_runner.go:130] ! I1014 15:46:18.419882       1 shared_informer.go:320] Caches are synced for node config
	I1014 08:47:24.514320   15224 command_runner.go:130] ! I1014 15:46:18.419918       1 shared_informer.go:320] Caches are synced for service config
	I1014 08:47:24.514320   15224 command_runner.go:130] ! I1014 15:46:18.435586       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1014 08:47:24.516804   15224 logs.go:123] Gathering logs for kube-controller-manager [712aad669c9f] ...
	I1014 08:47:24.517328   15224 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 712aad669c9f"
	I1014 08:47:24.557349   15224 command_runner.go:130] ! I1014 15:22:34.276457       1 serving.go:386] Generated self-signed cert in-memory
	I1014 08:47:24.557880   15224 command_runner.go:130] ! I1014 15:22:34.721812       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I1014 08:47:24.557880   15224 command_runner.go:130] ! I1014 15:22:34.722099       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 08:47:24.557880   15224 command_runner.go:130] ! I1014 15:22:34.724748       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1014 08:47:24.557880   15224 command_runner.go:130] ! I1014 15:22:34.725085       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1014 08:47:24.557993   15224 command_runner.go:130] ! I1014 15:22:34.725754       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I1014 08:47:24.557993   15224 command_runner.go:130] ! I1014 15:22:34.725985       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1014 08:47:24.557993   15224 command_runner.go:130] ! I1014 15:22:39.207411       1 controllermanager.go:797] "Started controller" controller="serviceaccount-token-controller"
	I1014 08:47:24.558095   15224 command_runner.go:130] ! I1014 15:22:39.208026       1 controllermanager.go:749] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I1014 08:47:24.558153   15224 command_runner.go:130] ! I1014 15:22:39.207651       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I1014 08:47:24.558153   15224 command_runner.go:130] ! I1014 15:22:39.210064       1 shared_informer.go:320] Caches are synced for tokens
	I1014 08:47:24.558153   15224 command_runner.go:130] ! I1014 15:22:39.224528       1 controllermanager.go:797] "Started controller" controller="taint-eviction-controller"
	I1014 08:47:24.558153   15224 command_runner.go:130] ! I1014 15:22:39.224966       1 taint_eviction.go:281] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I1014 08:47:24.558223   15224 command_runner.go:130] ! I1014 15:22:39.225213       1 taint_eviction.go:287] "Sending events to api server" logger="taint-eviction-controller"
	I1014 08:47:24.558223   15224 command_runner.go:130] ! I1014 15:22:39.226734       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I1014 08:47:24.558284   15224 command_runner.go:130] ! I1014 15:22:39.238395       1 controllermanager.go:797] "Started controller" controller="pod-garbage-collector-controller"
	I1014 08:47:24.558309   15224 command_runner.go:130] ! I1014 15:22:39.238610       1 gc_controller.go:99] "Starting GC controller" logger="pod-garbage-collector-controller"
	I1014 08:47:24.558309   15224 command_runner.go:130] ! I1014 15:22:39.239186       1 shared_informer.go:313] Waiting for caches to sync for GC
	I1014 08:47:24.558350   15224 command_runner.go:130] ! I1014 15:22:39.257957       1 controllermanager.go:797] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I1014 08:47:24.558395   15224 command_runner.go:130] ! I1014 15:22:39.258113       1 horizontal.go:201] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I1014 08:47:24.558395   15224 command_runner.go:130] ! I1014 15:22:39.264110       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I1014 08:47:24.558395   15224 command_runner.go:130] ! I1014 15:22:39.291746       1 controllermanager.go:797] "Started controller" controller="token-cleaner-controller"
	I1014 08:47:24.558458   15224 command_runner.go:130] ! I1014 15:22:39.291968       1 tokencleaner.go:117] "Starting token cleaner controller" logger="token-cleaner-controller"
	I1014 08:47:24.558458   15224 command_runner.go:130] ! I1014 15:22:39.292012       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I1014 08:47:24.558458   15224 command_runner.go:130] ! I1014 15:22:39.292035       1 shared_informer.go:320] Caches are synced for token_cleaner
	I1014 08:47:24.558458   15224 command_runner.go:130] ! E1014 15:22:39.298368       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I1014 08:47:24.558546   15224 command_runner.go:130] ! I1014 15:22:39.298490       1 controllermanager.go:775] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I1014 08:47:24.558577   15224 command_runner.go:130] ! I1014 15:22:39.320068       1 controllermanager.go:797] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I1014 08:47:24.558577   15224 command_runner.go:130] ! I1014 15:22:39.321579       1 pvc_protection_controller.go:105] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I1014 08:47:24.558577   15224 command_runner.go:130] ! I1014 15:22:39.322507       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I1014 08:47:24.558577   15224 command_runner.go:130] ! I1014 15:22:39.334562       1 controllermanager.go:797] "Started controller" controller="endpointslice-mirroring-controller"
	I1014 08:47:24.558577   15224 command_runner.go:130] ! I1014 15:22:39.335065       1 endpointslicemirroring_controller.go:227] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I1014 08:47:24.558577   15224 command_runner.go:130] ! I1014 15:22:39.335174       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I1014 08:47:24.558577   15224 command_runner.go:130] ! I1014 15:22:39.357454       1 controllermanager.go:797] "Started controller" controller="serviceaccount-controller"
	I1014 08:47:24.558577   15224 command_runner.go:130] ! I1014 15:22:39.357636       1 serviceaccounts_controller.go:114] "Starting service account controller" logger="serviceaccount-controller"
	I1014 08:47:24.558577   15224 command_runner.go:130] ! I1014 15:22:39.357669       1 shared_informer.go:313] Waiting for caches to sync for service account
	I1014 08:47:24.558577   15224 command_runner.go:130] ! I1014 15:22:39.377687       1 controllermanager.go:797] "Started controller" controller="daemonset-controller"
	I1014 08:47:24.558577   15224 command_runner.go:130] ! I1014 15:22:39.378056       1 daemon_controller.go:294] "Starting daemon sets controller" logger="daemonset-controller"
	I1014 08:47:24.558577   15224 command_runner.go:130] ! I1014 15:22:39.378087       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I1014 08:47:24.558577   15224 command_runner.go:130] ! I1014 15:22:39.416186       1 controllermanager.go:797] "Started controller" controller="disruption-controller"
	I1014 08:47:24.558577   15224 command_runner.go:130] ! I1014 15:22:39.416643       1 disruption.go:452] "Sending events to api server." logger="disruption-controller"
	I1014 08:47:24.558577   15224 command_runner.go:130] ! I1014 15:22:39.417022       1 disruption.go:463] "Starting disruption controller" logger="disruption-controller"
	I1014 08:47:24.558577   15224 command_runner.go:130] ! I1014 15:22:39.417371       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I1014 08:47:24.558577   15224 command_runner.go:130] ! I1014 15:22:39.469032       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I1014 08:47:24.558577   15224 command_runner.go:130] ! I1014 15:22:39.469507       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I1014 08:47:24.558577   15224 command_runner.go:130] ! I1014 15:22:39.469770       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I1014 08:47:24.558577   15224 command_runner.go:130] ! I1014 15:22:39.470779       1 controllermanager.go:797] "Started controller" controller="certificatesigningrequest-signing-controller"
	I1014 08:47:24.558577   15224 command_runner.go:130] ! I1014 15:22:39.470793       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I1014 08:47:24.558577   15224 command_runner.go:130] ! I1014 15:22:39.471453       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I1014 08:47:24.558577   15224 command_runner.go:130] ! I1014 15:22:39.470805       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I1014 08:47:24.558577   15224 command_runner.go:130] ! I1014 15:22:39.470829       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I1014 08:47:24.558577   15224 command_runner.go:130] ! I1014 15:22:39.471957       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I1014 08:47:24.558577   15224 command_runner.go:130] ! I1014 15:22:39.470841       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I1014 08:47:24.558577   15224 command_runner.go:130] ! I1014 15:22:39.470861       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I1014 08:47:24.558577   15224 command_runner.go:130] ! I1014 15:22:39.472955       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I1014 08:47:24.558577   15224 command_runner.go:130] ! I1014 15:22:39.470870       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I1014 08:47:24.559113   15224 command_runner.go:130] ! I1014 15:22:39.621859       1 controllermanager.go:797] "Started controller" controller="persistentvolume-binder-controller"
	I1014 08:47:24.559113   15224 command_runner.go:130] ! I1014 15:22:39.622638       1 pv_controller_base.go:308] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I1014 08:47:24.559179   15224 command_runner.go:130] ! I1014 15:22:39.623052       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I1014 08:47:24.559179   15224 command_runner.go:130] ! I1014 15:22:39.777984       1 controllermanager.go:797] "Started controller" controller="root-ca-certificate-publisher-controller"
	I1014 08:47:24.559179   15224 command_runner.go:130] ! I1014 15:22:39.778063       1 publisher.go:107] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I1014 08:47:24.559179   15224 command_runner.go:130] ! I1014 15:22:39.778141       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I1014 08:47:24.559179   15224 command_runner.go:130] ! I1014 15:22:39.918879       1 controllermanager.go:797] "Started controller" controller="replicationcontroller-controller"
	I1014 08:47:24.559179   15224 command_runner.go:130] ! I1014 15:22:39.919046       1 replica_set.go:217] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I1014 08:47:24.559179   15224 command_runner.go:130] ! I1014 15:22:39.919060       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I1014 08:47:24.559179   15224 command_runner.go:130] ! I1014 15:22:40.166453       1 controllermanager.go:797] "Started controller" controller="garbage-collector-controller"
	I1014 08:47:24.559414   15224 command_runner.go:130] ! I1014 15:22:40.167822       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I1014 08:47:24.559414   15224 command_runner.go:130] ! I1014 15:22:40.168483       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I1014 08:47:24.559414   15224 command_runner.go:130] ! I1014 15:22:40.168745       1 graph_builder.go:351] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I1014 08:47:24.559414   15224 command_runner.go:130] ! I1014 15:22:40.423412       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I1014 08:47:24.559514   15224 command_runner.go:130] ! I1014 15:22:40.423795       1 controllermanager.go:797] "Started controller" controller="node-ipam-controller"
	I1014 08:47:24.559514   15224 command_runner.go:130] ! I1014 15:22:40.424239       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I1014 08:47:24.559583   15224 command_runner.go:130] ! I1014 15:22:40.424496       1 controllermanager.go:775] "Warning: skipping controller" controller="node-route-controller"
	I1014 08:47:24.559611   15224 command_runner.go:130] ! I1014 15:22:40.424173       1 node_ipam_controller.go:141] "Starting ipam controller" logger="node-ipam-controller"
	I1014 08:47:24.559645   15224 command_runner.go:130] ! I1014 15:22:40.425286       1 shared_informer.go:313] Waiting for caches to sync for node
	I1014 08:47:24.559645   15224 command_runner.go:130] ! I1014 15:22:40.570482       1 controllermanager.go:797] "Started controller" controller="persistentvolume-expander-controller"
	I1014 08:47:24.559645   15224 command_runner.go:130] ! I1014 15:22:40.570669       1 expand_controller.go:328] "Starting expand controller" logger="persistentvolume-expander-controller"
	I1014 08:47:24.559704   15224 command_runner.go:130] ! I1014 15:22:40.570684       1 shared_informer.go:313] Waiting for caches to sync for expand
	I1014 08:47:24.559704   15224 command_runner.go:130] ! I1014 15:22:40.718742       1 controllermanager.go:797] "Started controller" controller="ttl-after-finished-controller"
	I1014 08:47:24.559761   15224 command_runner.go:130] ! I1014 15:22:40.718766       1 controllermanager.go:749] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I1014 08:47:24.559761   15224 command_runner.go:130] ! I1014 15:22:40.718828       1 ttlafterfinished_controller.go:112] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I1014 08:47:24.559805   15224 command_runner.go:130] ! I1014 15:22:40.718839       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I1014 08:47:24.559805   15224 command_runner.go:130] ! I1014 15:22:40.875244       1 controllermanager.go:797] "Started controller" controller="endpointslice-controller"
	I1014 08:47:24.559805   15224 command_runner.go:130] ! I1014 15:22:40.875390       1 endpointslice_controller.go:281] "Starting endpoint slice controller" logger="endpointslice-controller"
	I1014 08:47:24.559869   15224 command_runner.go:130] ! I1014 15:22:40.875405       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I1014 08:47:24.559926   15224 command_runner.go:130] ! I1014 15:22:41.022254       1 controllermanager.go:797] "Started controller" controller="replicaset-controller"
	I1014 08:47:24.559926   15224 command_runner.go:130] ! I1014 15:22:41.023099       1 replica_set.go:217] "Starting controller" logger="replicaset-controller" name="replicaset"
	I1014 08:47:24.559926   15224 command_runner.go:130] ! I1014 15:22:41.023161       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I1014 08:47:24.559926   15224 command_runner.go:130] ! I1014 15:22:41.176342       1 controllermanager.go:797] "Started controller" controller="statefulset-controller"
	I1014 08:47:24.559926   15224 command_runner.go:130] ! I1014 15:22:41.176460       1 stateful_set.go:166] "Starting stateful set controller" logger="statefulset-controller"
	I1014 08:47:24.559926   15224 command_runner.go:130] ! I1014 15:22:41.176471       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I1014 08:47:24.559926   15224 command_runner.go:130] ! I1014 15:22:41.319171       1 controllermanager.go:797] "Started controller" controller="persistentvolume-protection-controller"
	I1014 08:47:24.559926   15224 command_runner.go:130] ! I1014 15:22:41.319300       1 pv_protection_controller.go:81] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I1014 08:47:24.559926   15224 command_runner.go:130] ! I1014 15:22:41.319332       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I1014 08:47:24.559926   15224 command_runner.go:130] ! I1014 15:22:41.469263       1 controllermanager.go:797] "Started controller" controller="clusterrole-aggregation-controller"
	I1014 08:47:24.559926   15224 command_runner.go:130] ! I1014 15:22:41.469488       1 clusterroleaggregation_controller.go:194] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I1014 08:47:24.559926   15224 command_runner.go:130] ! I1014 15:22:41.470311       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I1014 08:47:24.559926   15224 command_runner.go:130] ! I1014 15:22:41.618471       1 controllermanager.go:797] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I1014 08:47:24.559926   15224 command_runner.go:130] ! I1014 15:22:41.618507       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I1014 08:47:24.559926   15224 command_runner.go:130] ! I1014 15:22:41.619582       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I1014 08:47:24.559926   15224 command_runner.go:130] ! I1014 15:22:41.813364       1 controllermanager.go:797] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I1014 08:47:24.559926   15224 command_runner.go:130] ! I1014 15:22:41.813412       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I1014 08:47:24.559926   15224 command_runner.go:130] ! I1014 15:22:42.123997       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I1014 08:47:24.559926   15224 command_runner.go:130] ! I1014 15:22:42.124656       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I1014 08:47:24.559926   15224 command_runner.go:130] ! I1014 15:22:42.125147       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I1014 08:47:24.559926   15224 command_runner.go:130] ! I1014 15:22:42.125502       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I1014 08:47:24.559926   15224 command_runner.go:130] ! I1014 15:22:42.125684       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I1014 08:47:24.559926   15224 command_runner.go:130] ! I1014 15:22:42.125715       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I1014 08:47:24.559926   15224 command_runner.go:130] ! I1014 15:22:42.125765       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I1014 08:47:24.559926   15224 command_runner.go:130] ! I1014 15:22:42.125789       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I1014 08:47:24.559926   15224 command_runner.go:130] ! I1014 15:22:42.125821       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I1014 08:47:24.560455   15224 command_runner.go:130] ! I1014 15:22:42.125850       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I1014 08:47:24.560505   15224 command_runner.go:130] ! I1014 15:22:42.125919       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I1014 08:47:24.560505   15224 command_runner.go:130] ! I1014 15:22:42.125938       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I1014 08:47:24.560505   15224 command_runner.go:130] ! W1014 15:22:42.125970       1 shared_informer.go:597] resyncPeriod 22h30m25.60471532s is smaller than resyncCheckPeriod 23h6m49.635307948s and the informer has already started. Changing it to 23h6m49.635307948s
	I1014 08:47:24.560505   15224 command_runner.go:130] ! W1014 15:22:42.126028       1 shared_informer.go:597] resyncPeriod 22h40m57.132720005s is smaller than resyncCheckPeriod 23h6m49.635307948s and the informer has already started. Changing it to 23h6m49.635307948s
	I1014 08:47:24.560505   15224 command_runner.go:130] ! I1014 15:22:42.126215       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I1014 08:47:24.560612   15224 command_runner.go:130] ! I1014 15:22:42.126353       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I1014 08:47:24.560612   15224 command_runner.go:130] ! I1014 15:22:42.126435       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I1014 08:47:24.560612   15224 command_runner.go:130] ! I1014 15:22:42.126461       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I1014 08:47:24.560676   15224 command_runner.go:130] ! I1014 15:22:42.126498       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I1014 08:47:24.560676   15224 command_runner.go:130] ! I1014 15:22:42.126514       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I1014 08:47:24.560676   15224 command_runner.go:130] ! I1014 15:22:42.126546       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I1014 08:47:24.560676   15224 command_runner.go:130] ! I1014 15:22:42.126572       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I1014 08:47:24.560676   15224 command_runner.go:130] ! I1014 15:22:42.126591       1 controllermanager.go:797] "Started controller" controller="resourcequota-controller"
	I1014 08:47:24.560676   15224 command_runner.go:130] ! I1014 15:22:42.127139       1 resource_quota_controller.go:300] "Starting resource quota controller" logger="resourcequota-controller"
	I1014 08:47:24.560676   15224 command_runner.go:130] ! I1014 15:22:42.127191       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I1014 08:47:24.560676   15224 command_runner.go:130] ! I1014 15:22:42.127239       1 resource_quota_monitor.go:308] "QuotaMonitor running" logger="resourcequota-controller"
	I1014 08:47:24.560676   15224 command_runner.go:130] ! I1014 15:22:42.377410       1 controllermanager.go:797] "Started controller" controller="namespace-controller"
	I1014 08:47:24.560676   15224 command_runner.go:130] ! I1014 15:22:42.378109       1 namespace_controller.go:202] "Starting namespace controller" logger="namespace-controller"
	I1014 08:47:24.560676   15224 command_runner.go:130] ! I1014 15:22:42.378533       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I1014 08:47:24.560676   15224 command_runner.go:130] ! I1014 15:22:42.520088       1 controllermanager.go:797] "Started controller" controller="job-controller"
	I1014 08:47:24.560676   15224 command_runner.go:130] ! I1014 15:22:42.520194       1 job_controller.go:226] "Starting job controller" logger="job-controller"
	I1014 08:47:24.560676   15224 command_runner.go:130] ! I1014 15:22:42.520661       1 shared_informer.go:313] Waiting for caches to sync for job
	I1014 08:47:24.560676   15224 command_runner.go:130] ! I1014 15:22:42.669141       1 controllermanager.go:797] "Started controller" controller="ttl-controller"
	I1014 08:47:24.560676   15224 command_runner.go:130] ! I1014 15:22:42.669227       1 ttl_controller.go:127] "Starting TTL controller" logger="ttl-controller"
	I1014 08:47:24.560676   15224 command_runner.go:130] ! I1014 15:22:42.669239       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I1014 08:47:24.560676   15224 command_runner.go:130] ! I1014 15:22:42.713738       1 node_lifecycle_controller.go:430] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I1014 08:47:24.560676   15224 command_runner.go:130] ! I1014 15:22:42.713795       1 controllermanager.go:797] "Started controller" controller="node-lifecycle-controller"
	I1014 08:47:24.560676   15224 command_runner.go:130] ! I1014 15:22:42.713972       1 node_lifecycle_controller.go:464] "Sending events to api server" logger="node-lifecycle-controller"
	I1014 08:47:24.560676   15224 command_runner.go:130] ! I1014 15:22:42.714019       1 node_lifecycle_controller.go:475] "Starting node controller" logger="node-lifecycle-controller"
	I1014 08:47:24.560676   15224 command_runner.go:130] ! I1014 15:22:42.714028       1 shared_informer.go:313] Waiting for caches to sync for taint
	I1014 08:47:24.560676   15224 command_runner.go:130] ! E1014 15:22:42.870353       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I1014 08:47:24.560676   15224 command_runner.go:130] ! I1014 15:22:42.870400       1 controllermanager.go:775] "Warning: skipping controller" controller="service-lb-controller"
	I1014 08:47:24.560676   15224 command_runner.go:130] ! I1014 15:22:43.022018       1 controllermanager.go:797] "Started controller" controller="persistentvolume-attach-detach-controller"
	I1014 08:47:24.560676   15224 command_runner.go:130] ! I1014 15:22:43.022670       1 attach_detach_controller.go:338] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I1014 08:47:24.560676   15224 command_runner.go:130] ! I1014 15:22:43.022756       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I1014 08:47:24.560676   15224 command_runner.go:130] ! I1014 15:22:43.169053       1 controllermanager.go:797] "Started controller" controller="ephemeral-volume-controller"
	I1014 08:47:24.560676   15224 command_runner.go:130] ! I1014 15:22:43.169165       1 controller.go:173] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I1014 08:47:24.560676   15224 command_runner.go:130] ! I1014 15:22:43.169572       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I1014 08:47:24.561203   15224 command_runner.go:130] ! I1014 15:22:43.319453       1 controllermanager.go:797] "Started controller" controller="endpoints-controller"
	I1014 08:47:24.561203   15224 command_runner.go:130] ! I1014 15:22:43.319620       1 endpoints_controller.go:182] "Starting endpoint controller" logger="endpoints-controller"
	I1014 08:47:24.561283   15224 command_runner.go:130] ! I1014 15:22:43.319648       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I1014 08:47:24.561283   15224 command_runner.go:130] ! I1014 15:22:43.471065       1 controllermanager.go:797] "Started controller" controller="deployment-controller"
	I1014 08:47:24.561283   15224 command_runner.go:130] ! I1014 15:22:43.471807       1 deployment_controller.go:173] "Starting controller" logger="deployment-controller" controller="deployment"
	I1014 08:47:24.561283   15224 command_runner.go:130] ! I1014 15:22:43.472102       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I1014 08:47:24.561283   15224 command_runner.go:130] ! I1014 15:22:43.621382       1 controllermanager.go:797] "Started controller" controller="cronjob-controller"
	I1014 08:47:24.561384   15224 command_runner.go:130] ! I1014 15:22:43.621522       1 cronjob_controllerv2.go:145] "Starting cronjob controller v2" logger="cronjob-controller"
	I1014 08:47:24.561407   15224 command_runner.go:130] ! I1014 15:22:43.621537       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I1014 08:47:24.561407   15224 command_runner.go:130] ! I1014 15:22:43.663267       1 controllermanager.go:797] "Started controller" controller="certificatesigningrequest-approving-controller"
	I1014 08:47:24.561474   15224 command_runner.go:130] ! I1014 15:22:43.663415       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I1014 08:47:24.561500   15224 command_runner.go:130] ! I1014 15:22:43.663427       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I1014 08:47:24.561532   15224 command_runner.go:130] ! I1014 15:22:43.822946       1 controllermanager.go:797] "Started controller" controller="bootstrap-signer-controller"
	I1014 08:47:24.561532   15224 command_runner.go:130] ! I1014 15:22:43.822992       1 controllermanager.go:775] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I1014 08:47:24.561532   15224 command_runner.go:130] ! I1014 15:22:43.823061       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I1014 08:47:24.561532   15224 command_runner.go:130] ! I1014 15:22:43.863507       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I1014 08:47:24.561532   15224 command_runner.go:130] ! I1014 15:22:43.863638       1 controllermanager.go:797] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I1014 08:47:24.561532   15224 command_runner.go:130] ! I1014 15:22:43.863659       1 controllermanager.go:749] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I1014 08:47:24.561532   15224 command_runner.go:130] ! I1014 15:22:43.902554       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I1014 08:47:24.561532   15224 command_runner.go:130] ! I1014 15:22:43.913563       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I1014 08:47:24.561532   15224 command_runner.go:130] ! I1014 15:22:43.916687       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-671000\" does not exist"
	I1014 08:47:24.561532   15224 command_runner.go:130] ! I1014 15:22:43.921355       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I1014 08:47:24.561532   15224 command_runner.go:130] ! I1014 15:22:43.921578       1 shared_informer.go:320] Caches are synced for cronjob
	I1014 08:47:24.561532   15224 command_runner.go:130] ! I1014 15:22:43.921709       1 shared_informer.go:320] Caches are synced for job
	I1014 08:47:24.561532   15224 command_runner.go:130] ! I1014 15:22:43.921822       1 shared_informer.go:320] Caches are synced for PV protection
	I1014 08:47:24.561532   15224 command_runner.go:130] ! I1014 15:22:43.922806       1 shared_informer.go:320] Caches are synced for PVC protection
	I1014 08:47:24.561532   15224 command_runner.go:130] ! I1014 15:22:43.922814       1 shared_informer.go:320] Caches are synced for TTL after finished
	I1014 08:47:24.561532   15224 command_runner.go:130] ! I1014 15:22:43.924127       1 shared_informer.go:320] Caches are synced for persistent volume
	I1014 08:47:24.561532   15224 command_runner.go:130] ! I1014 15:22:43.924751       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I1014 08:47:24.561532   15224 command_runner.go:130] ! I1014 15:22:43.925596       1 shared_informer.go:320] Caches are synced for node
	I1014 08:47:24.561532   15224 command_runner.go:130] ! I1014 15:22:43.925653       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I1014 08:47:24.561532   15224 command_runner.go:130] ! I1014 15:22:43.925863       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1014 08:47:24.561532   15224 command_runner.go:130] ! I1014 15:22:43.925961       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I1014 08:47:24.561532   15224 command_runner.go:130] ! I1014 15:22:43.925971       1 shared_informer.go:320] Caches are synced for cidrallocator
	I1014 08:47:24.561532   15224 command_runner.go:130] ! I1014 15:22:43.927918       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I1014 08:47:24.561532   15224 command_runner.go:130] ! I1014 15:22:43.933656       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I1014 08:47:24.561532   15224 command_runner.go:130] ! I1014 15:22:43.935993       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I1014 08:47:24.561532   15224 command_runner.go:130] ! I1014 15:22:43.939827       1 shared_informer.go:320] Caches are synced for GC
	I1014 08:47:24.561532   15224 command_runner.go:130] ! I1014 15:22:43.945652       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-671000" podCIDRs=["10.244.0.0/24"]
	I1014 08:47:24.561532   15224 command_runner.go:130] ! I1014 15:22:43.945733       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000"
	I1014 08:47:24.561532   15224 command_runner.go:130] ! I1014 15:22:43.946434       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000"
	I1014 08:47:24.561532   15224 command_runner.go:130] ! I1014 15:22:43.958217       1 shared_informer.go:320] Caches are synced for service account
	I1014 08:47:24.561532   15224 command_runner.go:130] ! I1014 15:22:43.964566       1 shared_informer.go:320] Caches are synced for HPA
	I1014 08:47:24.562071   15224 command_runner.go:130] ! I1014 15:22:43.970909       1 shared_informer.go:320] Caches are synced for TTL
	I1014 08:47:24.562071   15224 command_runner.go:130] ! I1014 15:22:43.971119       1 shared_informer.go:320] Caches are synced for ephemeral
	I1014 08:47:24.562121   15224 command_runner.go:130] ! I1014 15:22:43.971337       1 shared_informer.go:320] Caches are synced for expand
	I1014 08:47:24.562121   15224 command_runner.go:130] ! I1014 15:22:43.975501       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I1014 08:47:24.562165   15224 command_runner.go:130] ! I1014 15:22:43.976796       1 shared_informer.go:320] Caches are synced for stateful set
	I1014 08:47:24.562165   15224 command_runner.go:130] ! I1014 15:22:43.978344       1 shared_informer.go:320] Caches are synced for crt configmap
	I1014 08:47:24.562165   15224 command_runner.go:130] ! I1014 15:22:43.978435       1 shared_informer.go:320] Caches are synced for daemon sets
	I1014 08:47:24.562275   15224 command_runner.go:130] ! I1014 15:22:43.980084       1 shared_informer.go:320] Caches are synced for namespace
	I1014 08:47:24.562452   15224 command_runner.go:130] ! I1014 15:22:44.014728       1 shared_informer.go:320] Caches are synced for taint
	I1014 08:47:24.562452   15224 command_runner.go:130] ! I1014 15:22:44.015046       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1014 08:47:24.562452   15224 command_runner.go:130] ! I1014 15:22:44.015932       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-671000"
	I1014 08:47:24.562452   15224 command_runner.go:130] ! I1014 15:22:44.016156       1 node_lifecycle_controller.go:1036] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1014 08:47:24.562542   15224 command_runner.go:130] ! I1014 15:22:44.020094       1 shared_informer.go:320] Caches are synced for endpoint
	I1014 08:47:24.562542   15224 command_runner.go:130] ! I1014 15:22:44.020640       1 shared_informer.go:320] Caches are synced for ReplicationController
	I1014 08:47:24.562542   15224 command_runner.go:130] ! I1014 15:22:44.071958       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I1014 08:47:24.562609   15224 command_runner.go:130] ! I1014 15:22:44.103447       1 shared_informer.go:320] Caches are synced for resource quota
	I1014 08:47:24.562637   15224 command_runner.go:130] ! I1014 15:22:44.118642       1 shared_informer.go:320] Caches are synced for disruption
	I1014 08:47:24.562664   15224 command_runner.go:130] ! I1014 15:22:44.123565       1 shared_informer.go:320] Caches are synced for attach detach
	I1014 08:47:24.562664   15224 command_runner.go:130] ! I1014 15:22:44.124082       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I1014 08:47:24.562664   15224 command_runner.go:130] ! I1014 15:22:44.128052       1 shared_informer.go:320] Caches are synced for resource quota
	I1014 08:47:24.562664   15224 command_runner.go:130] ! I1014 15:22:44.164601       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I1014 08:47:24.562664   15224 command_runner.go:130] ! I1014 15:22:44.170410       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I1014 08:47:24.562664   15224 command_runner.go:130] ! I1014 15:22:44.172085       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I1014 08:47:24.562664   15224 command_runner.go:130] ! I1014 15:22:44.172168       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I1014 08:47:24.562664   15224 command_runner.go:130] ! I1014 15:22:44.172762       1 shared_informer.go:320] Caches are synced for deployment
	I1014 08:47:24.562664   15224 command_runner.go:130] ! I1014 15:22:44.173998       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1014 08:47:24.562664   15224 command_runner.go:130] ! I1014 15:22:44.583260       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000"
	I1014 08:47:24.562664   15224 command_runner.go:130] ! I1014 15:22:44.634360       1 shared_informer.go:320] Caches are synced for garbage collector
	I1014 08:47:24.562664   15224 command_runner.go:130] ! I1014 15:22:44.669630       1 shared_informer.go:320] Caches are synced for garbage collector
	I1014 08:47:24.562664   15224 command_runner.go:130] ! I1014 15:22:44.669841       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I1014 08:47:24.562664   15224 command_runner.go:130] ! I1014 15:22:45.450540       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="308.738304ms"
	I1014 08:47:24.562664   15224 command_runner.go:130] ! I1014 15:22:45.524372       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="73.173482ms"
	I1014 08:47:24.562664   15224 command_runner.go:130] ! I1014 15:22:45.524478       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="65.397µs"
	I1014 08:47:24.562664   15224 command_runner.go:130] ! I1014 15:22:46.000395       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="74.724912ms"
	I1014 08:47:24.562664   15224 command_runner.go:130] ! I1014 15:22:46.017930       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="16.329807ms"
	I1014 08:47:24.562664   15224 command_runner.go:130] ! I1014 15:22:46.018255       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="275.988µs"
	I1014 08:47:24.562664   15224 command_runner.go:130] ! I1014 15:23:06.558708       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000"
	I1014 08:47:24.562664   15224 command_runner.go:130] ! I1014 15:23:06.579629       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000"
	I1014 08:47:24.562664   15224 command_runner.go:130] ! I1014 15:23:06.601705       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="101.399µs"
	I1014 08:47:24.562664   15224 command_runner.go:130] ! I1014 15:23:06.643522       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="58.099µs"
	I1014 08:47:24.562664   15224 command_runner.go:130] ! I1014 15:23:08.868021       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="148.904µs"
	I1014 08:47:24.562664   15224 command_runner.go:130] ! I1014 15:23:08.936155       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="33.695698ms"
	I1014 08:47:24.562664   15224 command_runner.go:130] ! I1014 15:23:08.939220       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="3.012072ms"
	I1014 08:47:24.563195   15224 command_runner.go:130] ! I1014 15:23:09.023157       1 node_lifecycle_controller.go:1055] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I1014 08:47:24.563257   15224 command_runner.go:130] ! I1014 15:23:10.921399       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000"
	I1014 08:47:24.563257   15224 command_runner.go:130] ! I1014 15:25:49.920125       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-671000-m02\" does not exist"
	I1014 08:47:24.563257   15224 command_runner.go:130] ! I1014 15:25:49.955308       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-671000-m02" podCIDRs=["10.244.1.0/24"]
	I1014 08:47:24.563257   15224 command_runner.go:130] ! I1014 15:25:49.956041       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:25:49.956493       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:25:50.332394       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:25:50.885049       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:25:54.059204       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-671000-m02"
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:25:54.342262       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:26:00.157293       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:26:18.720546       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:26:18.720611       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-671000-m02"
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:26:18.738467       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:26:19.084143       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:26:20.411603       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:26:44.435156       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="75.721873ms"
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:26:44.496244       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="60.852418ms"
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:26:44.496945       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="131.501µs"
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:26:44.540742       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="117.6µs"
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:26:47.680512       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="13.465591ms"
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:26:47.680616       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="27.8µs"
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:26:47.878633       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="13.308091ms"
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:26:47.878779       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="30.7µs"
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:26:50.724728       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:27:15.823577       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000"
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:30:35.115559       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-671000-m02"
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:30:35.116078       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-671000-m03\" does not exist"
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:30:35.128392       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-671000-m03" podCIDRs=["10.244.2.0/24"]
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:30:35.128677       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:30:35.128924       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:30:35.152829       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:30:35.373296       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:30:35.920577       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:30:39.132287       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-671000-m03"
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:30:39.151825       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:30:45.490553       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:31:04.306000       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:31:04.306453       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-671000-m03"
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:31:04.323636       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:31:05.841789       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:31:09.153752       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:31:56.911043       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:32:21.316935       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000"
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:36:11.719246       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:37:02.446841       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:37:26.676097       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000"
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:38:59.261991       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-671000-m02"
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:38:59.262728       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:24.563401   15224 command_runner.go:130] ! I1014 15:38:59.286871       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:24.564229   15224 command_runner.go:130] ! I1014 15:39:04.424423       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:24.564229   15224 command_runner.go:130] ! I1014 15:41:24.025444       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:24.564229   15224 command_runner.go:130] ! I1014 15:41:24.063975       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:24.564229   15224 command_runner.go:130] ! I1014 15:41:29.184402       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:24.564229   15224 command_runner.go:130] ! I1014 15:41:29.185577       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-671000-m02"
	I1014 08:47:24.564229   15224 command_runner.go:130] ! I1014 15:41:34.952323       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-671000-m03\" does not exist"
	I1014 08:47:24.564229   15224 command_runner.go:130] ! I1014 15:41:34.952330       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-671000-m02"
	I1014 08:47:24.564229   15224 command_runner.go:130] ! I1014 15:41:34.966125       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-671000-m03" podCIDRs=["10.244.3.0/24"]
	I1014 08:47:24.564229   15224 command_runner.go:130] ! I1014 15:41:34.966148       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:24.564229   15224 command_runner.go:130] ! I1014 15:41:34.966505       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:24.564229   15224 command_runner.go:130] ! I1014 15:41:34.987165       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:24.564229   15224 command_runner.go:130] ! I1014 15:41:35.003234       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:24.564229   15224 command_runner.go:130] ! I1014 15:41:35.540526       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:24.564229   15224 command_runner.go:130] ! I1014 15:41:39.448073       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:24.564229   15224 command_runner.go:130] ! I1014 15:41:45.343875       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:24.564229   15224 command_runner.go:130] ! I1014 15:41:53.719761       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:24.564229   15224 command_runner.go:130] ! I1014 15:41:53.720945       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-671000-m02"
	I1014 08:47:24.564229   15224 command_runner.go:130] ! I1014 15:41:53.741507       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:24.564229   15224 command_runner.go:130] ! I1014 15:41:54.369330       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:24.564229   15224 command_runner.go:130] ! I1014 15:42:08.557249       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:24.564229   15224 command_runner.go:130] ! I1014 15:42:32.770970       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000"
	I1014 08:47:24.564229   15224 command_runner.go:130] ! I1014 15:43:29.631595       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-671000-m02"
	I1014 08:47:24.564229   15224 command_runner.go:130] ! I1014 15:43:29.632207       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:24.564229   15224 command_runner.go:130] ! I1014 15:43:29.853526       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:24.564229   15224 command_runner.go:130] ! I1014 15:43:35.163131       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:24.564229   15224 command_runner.go:130] ! I1014 15:43:45.119758       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:24.564229   15224 command_runner.go:130] ! I1014 15:43:45.151031       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:24.564229   15224 command_runner.go:130] ! I1014 15:43:45.251625       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="18.269341ms"
	I1014 08:47:24.564229   15224 command_runner.go:130] ! I1014 15:43:45.252472       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="121.1µs"
	I1014 08:47:24.584229   15224 logs.go:123] Gathering logs for kubelet ...
	I1014 08:47:24.584229   15224 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 08:47:24.613872   15224 command_runner.go:130] > Oct 14 15:46:05 multinode-671000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I1014 08:47:24.613872   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 kubelet[1480]: I1014 15:46:06.037054    1480 server.go:486] "Kubelet version" kubeletVersion="v1.31.1"
	I1014 08:47:24.613872   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 kubelet[1480]: I1014 15:46:06.037147    1480 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 08:47:24.613872   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 kubelet[1480]: I1014 15:46:06.038385    1480 server.go:929] "Client rotation is on, will bootstrap in background"
	I1014 08:47:24.614837   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 kubelet[1480]: E1014 15:46:06.039788    1480 run.go:72] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I1014 08:47:24.614837   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I1014 08:47:24.614837   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I1014 08:47:24.614837   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I1014 08:47:24.614837   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I1014 08:47:24.614837   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I1014 08:47:24.614837   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 kubelet[1540]: I1014 15:46:06.835721    1540 server.go:486] "Kubelet version" kubeletVersion="v1.31.1"
	I1014 08:47:24.614837   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 kubelet[1540]: I1014 15:46:06.835931    1540 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 08:47:24.614837   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 kubelet[1540]: I1014 15:46:06.836250    1540 server.go:929] "Client rotation is on, will bootstrap in background"
	I1014 08:47:24.614837   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 kubelet[1540]: E1014 15:46:06.836463    1540 run.go:72] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I1014 08:47:24.614837   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I1014 08:47:24.614837   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I1014 08:47:24.614837   15224 command_runner.go:130] > Oct 14 15:46:07 multinode-671000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I1014 08:47:24.614837   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I1014 08:47:24.614837   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.687712    1622 server.go:486] "Kubelet version" kubeletVersion="v1.31.1"
	I1014 08:47:24.614837   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.688474    1622 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 08:47:24.614837   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.689105    1622 server.go:929] "Client rotation is on, will bootstrap in background"
	I1014 08:47:24.614837   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.691939    1622 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I1014 08:47:24.614837   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.718455    1622 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1014 08:47:24.614837   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: E1014 15:46:09.739709    1622 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
	I1014 08:47:24.614837   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.739760    1622 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
	I1014 08:47:24.614837   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.744155    1622 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I1014 08:47:24.614837   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.744395    1622 server.go:812] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I1014 08:47:24.614837   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.744486    1622 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
	I1014 08:47:24.614837   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.744668    1622 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I1014 08:47:24.614837   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.744761    1622 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"multinode-671000","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
	I1014 08:47:24.614837   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.746378    1622 topology_manager.go:138] "Creating topology manager with none policy"
	I1014 08:47:24.614837   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.746460    1622 container_manager_linux.go:300] "Creating device plugin manager"
	I1014 08:47:24.614837   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.746633    1622 state_mem.go:36] "Initialized new in-memory state store"
	I1014 08:47:24.614837   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.749964    1622 kubelet.go:408] "Attempting to sync node with API server"
	I1014 08:47:24.614837   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.750004    1622 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
	I1014 08:47:24.614837   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.750036    1622 kubelet.go:314] "Adding apiserver pod source"
	I1014 08:47:24.614837   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.750844    1622 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I1014 08:47:24.614837   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.756693    1622 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="docker" version="27.3.1" apiVersion="v1"
	I1014 08:47:24.614837   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.763816    1622 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
	I1014 08:47:24.614837   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: W1014 15:46:09.764725    1622 probe.go:272] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I1014 08:47:24.614837   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.766925    1622 server.go:1269] "Started kubelet"
	I1014 08:47:24.614837   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: W1014 15:46:09.767088    1622 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-671000&limit=500&resourceVersion=0": dial tcp 172.20.106.123:8443: connect: connection refused
	I1014 08:47:24.614837   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: E1014 15:46:09.767172    1622 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-671000&limit=500&resourceVersion=0\": dial tcp 172.20.106.123:8443: connect: connection refused" logger="UnhandledError"
	I1014 08:47:24.615856   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: W1014 15:46:09.769189    1622 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.20.106.123:8443: connect: connection refused
	I1014 08:47:24.615856   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: E1014 15:46:09.769350    1622 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.20.106.123:8443: connect: connection refused" logger="UnhandledError"
	I1014 08:47:24.615856   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.769454    1622 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I1014 08:47:24.615856   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.770134    1622 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I1014 08:47:24.615856   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: E1014 15:46:09.772237    1622 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.20.106.123:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-671000.17fe5c47a6bff791  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-671000,UID:multinode-671000,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-671000,},FirstTimestamp:2024-10-14 15:46:09.766881169 +0000 UTC m=+0.158153075,LastTimestamp:2024-10-14 15:46:09.766881169 +0000 UTC m=+0.158153075,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-671000,}"
	I1014 08:47:24.615856   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.773096    1622 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
	I1014 08:47:24.615856   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.774576    1622 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I1014 08:47:24.615856   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.777686    1622 server.go:460] "Adding debug handlers to kubelet server"
	I1014 08:47:24.615856   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.780950    1622 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
	I1014 08:47:24.615856   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.788697    1622 volume_manager.go:289] "Starting Kubelet Volume Manager"
	I1014 08:47:24.615856   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: E1014 15:46:09.789003    1622 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"multinode-671000\" not found"
	I1014 08:47:24.615856   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.789640    1622 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
	I1014 08:47:24.615856   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.800447    1622 factory.go:221] Registration of the systemd container factory successfully
	I1014 08:47:24.615856   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.800536    1622 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I1014 08:47:24.615856   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.800587    1622 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
	I1014 08:47:24.615856   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: W1014 15:46:09.811192    1622 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.20.106.123:8443: connect: connection refused
	I1014 08:47:24.615856   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: E1014 15:46:09.811498    1622 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.20.106.123:8443: connect: connection refused" logger="UnhandledError"
	I1014 08:47:24.615856   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: E1014 15:46:09.812017    1622 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-671000?timeout=10s\": dial tcp 172.20.106.123:8443: connect: connection refused" interval="200ms"
	I1014 08:47:24.615856   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.863497    1622 cpu_manager.go:214] "Starting CPU manager" policy="none"
	I1014 08:47:24.615856   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.863530    1622 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
	I1014 08:47:24.615856   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.863554    1622 state_mem.go:36] "Initialized new in-memory state store"
	I1014 08:47:24.615856   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.868881    1622 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I1014 08:47:24.615856   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.868953    1622 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I1014 08:47:24.615856   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.868995    1622 policy_none.go:49] "None policy: Start"
	I1014 08:47:24.615856   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.872200    1622 reconciler.go:26] "Reconciler: start to sync state"
	I1014 08:47:24.615856   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.877834    1622 memory_manager.go:170] "Starting memorymanager" policy="None"
	I1014 08:47:24.615856   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.877929    1622 state_mem.go:35] "Initializing new in-memory state store"
	I1014 08:47:24.615856   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.878704    1622 state_mem.go:75] "Updated machine memory state"
	I1014 08:47:24.615856   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.884555    1622 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I1014 08:47:24.615856   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.885687    1622 eviction_manager.go:189] "Eviction manager: starting control loop"
	I1014 08:47:24.615856   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.885828    1622 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I1014 08:47:24.615856   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: E1014 15:46:09.889524    1622 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-671000\" not found"
	I1014 08:47:24.616835   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.892062    1622 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I1014 08:47:24.616835   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.900012    1622 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I1014 08:47:24.616835   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.905094    1622 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I1014 08:47:24.616835   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.906277    1622 status_manager.go:217] "Starting to sync pod status with apiserver"
	I1014 08:47:24.616835   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.906885    1622 kubelet.go:2321] "Starting kubelet main sync loop"
	I1014 08:47:24.616835   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: E1014 15:46:09.907458    1622 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
	I1014 08:47:24.616835   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: W1014 15:46:09.914061    1622 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.20.106.123:8443: connect: connection refused
	I1014 08:47:24.616835   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: E1014 15:46:09.914371    1622 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.20.106.123:8443: connect: connection refused" logger="UnhandledError"
	I1014 08:47:24.616835   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: E1014 15:46:09.933056    1622 iptables.go:577] "Could not set up iptables canary" err=<
	I1014 08:47:24.616835   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I1014 08:47:24.616835   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I1014 08:47:24.616835   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I1014 08:47:24.616835   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I1014 08:47:24.616835   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.987581    1622 kubelet_node_status.go:72] "Attempting to register node" node="multinode-671000"
	I1014 08:47:24.616835   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: E1014 15:46:09.988812    1622 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.20.106.123:8443: connect: connection refused" node="multinode-671000"
	I1014 08:47:24.616835   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.008458    1622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f8cc9a218fef4446839b0253827fc2dd89f8eccbd66ab7f0e5123c033f0aacc"
	I1014 08:47:24.616835   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: E1014 15:46:10.013887    1622 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-671000?timeout=10s\": dial tcp 172.20.106.123:8443: connect: connection refused" interval="400ms"
	I1014 08:47:24.616835   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.014354    1622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5733d27d2f1c328dbd19f6392a86e426f344b6f17c65211404fa797e84b69c9"
	I1014 08:47:24.616835   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.014436    1622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06e529266db4b2cf3ad09285cddeb89d7d11e858b5156cf73f41198c9500b9f2"
	I1014 08:47:24.616835   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.014506    1622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e48ddcfdf90ad3bfbe621f27c97a331f448947ca77dbd98ab3c9daef2c84e22"
	I1014 08:47:24.616835   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.020161    1622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2dc78387553ff4b78626f5e6aa103a40ec97f42ef49363e27d7d3698cd0df26f"
	I1014 08:47:24.616835   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.035902    1622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c6be2bd1889b6c5f021362c07c3a88f7f0ff266bb9e8ba4106d666b0f1d267d"
	I1014 08:47:24.616835   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.049024    1622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1863de70f2316e54fa61ef7c5c6aba94808669b81b1cc811dce745011ee807cb"
	I1014 08:47:24.616835   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.065264    1622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7144d8ce208cf8c176ad1fc9980a72d450a3d558c4f8f9ee453dea6b22358085"
	I1014 08:47:24.616835   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.079145    1622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bfdde08319e32b93d740933d5ab50829de8f9f3edacce92efe155b4ada4f4212"
	I1014 08:47:24.616835   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.179820    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/864b69f35cb25e9dd5d87a753a055a10-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-671000\" (UID: \"864b69f35cb25e9dd5d87a753a055a10\") " pod="kube-system/kube-apiserver-multinode-671000"
	I1014 08:47:24.616835   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.179915    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b83e31864e0ae98d29d960866012ecb0-k8s-certs\") pod \"kube-controller-manager-multinode-671000\" (UID: \"b83e31864e0ae98d29d960866012ecb0\") " pod="kube-system/kube-controller-manager-multinode-671000"
	I1014 08:47:24.616835   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.179945    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b83e31864e0ae98d29d960866012ecb0-kubeconfig\") pod \"kube-controller-manager-multinode-671000\" (UID: \"b83e31864e0ae98d29d960866012ecb0\") " pod="kube-system/kube-controller-manager-multinode-671000"
	I1014 08:47:24.616835   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.179963    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3e987cfaedc75c39145e8fc131c60c81-kubeconfig\") pod \"kube-scheduler-multinode-671000\" (UID: \"3e987cfaedc75c39145e8fc131c60c81\") " pod="kube-system/kube-scheduler-multinode-671000"
	I1014 08:47:24.617851   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.179984    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/679486a8f27de5805bc2e87fb1920dce-etcd-certs\") pod \"etcd-multinode-671000\" (UID: \"679486a8f27de5805bc2e87fb1920dce\") " pod="kube-system/etcd-multinode-671000"
	I1014 08:47:24.617851   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.180012    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/679486a8f27de5805bc2e87fb1920dce-etcd-data\") pod \"etcd-multinode-671000\" (UID: \"679486a8f27de5805bc2e87fb1920dce\") " pod="kube-system/etcd-multinode-671000"
	I1014 08:47:24.617851   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.180036    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/864b69f35cb25e9dd5d87a753a055a10-ca-certs\") pod \"kube-apiserver-multinode-671000\" (UID: \"864b69f35cb25e9dd5d87a753a055a10\") " pod="kube-system/kube-apiserver-multinode-671000"
	I1014 08:47:24.617851   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.180050    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/864b69f35cb25e9dd5d87a753a055a10-k8s-certs\") pod \"kube-apiserver-multinode-671000\" (UID: \"864b69f35cb25e9dd5d87a753a055a10\") " pod="kube-system/kube-apiserver-multinode-671000"
	I1014 08:47:24.617851   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.180068    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b83e31864e0ae98d29d960866012ecb0-ca-certs\") pod \"kube-controller-manager-multinode-671000\" (UID: \"b83e31864e0ae98d29d960866012ecb0\") " pod="kube-system/kube-controller-manager-multinode-671000"
	I1014 08:47:24.617851   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.180089    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b83e31864e0ae98d29d960866012ecb0-flexvolume-dir\") pod \"kube-controller-manager-multinode-671000\" (UID: \"b83e31864e0ae98d29d960866012ecb0\") " pod="kube-system/kube-controller-manager-multinode-671000"
	I1014 08:47:24.617851   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.180113    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b83e31864e0ae98d29d960866012ecb0-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-671000\" (UID: \"b83e31864e0ae98d29d960866012ecb0\") " pod="kube-system/kube-controller-manager-multinode-671000"
	I1014 08:47:24.617851   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.191857    1622 kubelet_node_status.go:72] "Attempting to register node" node="multinode-671000"
	I1014 08:47:24.617851   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: E1014 15:46:10.193195    1622 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.20.106.123:8443: connect: connection refused" node="multinode-671000"
	I1014 08:47:24.617851   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: E1014 15:46:10.421148    1622 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-671000?timeout=10s\": dial tcp 172.20.106.123:8443: connect: connection refused" interval="800ms"
	I1014 08:47:24.617851   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.595286    1622 kubelet_node_status.go:72] "Attempting to register node" node="multinode-671000"
	I1014 08:47:24.617851   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: E1014 15:46:10.596178    1622 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.20.106.123:8443: connect: connection refused" node="multinode-671000"
	I1014 08:47:24.617851   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: W1014 15:46:10.601172    1622 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.20.106.123:8443: connect: connection refused
	I1014 08:47:24.617851   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: E1014 15:46:10.601259    1622 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.20.106.123:8443: connect: connection refused" logger="UnhandledError"
	I1014 08:47:24.617851   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: W1014 15:46:10.913794    1622 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.20.106.123:8443: connect: connection refused
	I1014 08:47:24.617851   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: E1014 15:46:10.913870    1622 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.20.106.123:8443: connect: connection refused" logger="UnhandledError"
	I1014 08:47:24.617851   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 kubelet[1622]: W1014 15:46:11.078571    1622 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.20.106.123:8443: connect: connection refused
	I1014 08:47:24.617851   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 kubelet[1622]: E1014 15:46:11.078638    1622 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.20.106.123:8443: connect: connection refused" logger="UnhandledError"
	I1014 08:47:24.617851   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 kubelet[1622]: W1014 15:46:11.151154    1622 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-671000&limit=500&resourceVersion=0": dial tcp 172.20.106.123:8443: connect: connection refused
	I1014 08:47:24.618837   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 kubelet[1622]: E1014 15:46:11.151247    1622 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-671000&limit=500&resourceVersion=0\": dial tcp 172.20.106.123:8443: connect: connection refused" logger="UnhandledError"
	I1014 08:47:24.618837   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 kubelet[1622]: E1014 15:46:11.223425    1622 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-671000?timeout=10s\": dial tcp 172.20.106.123:8443: connect: connection refused" interval="1.6s"
	I1014 08:47:24.618837   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 kubelet[1622]: I1014 15:46:11.306759    1622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0697a11790e806fa4d679e3d97be4c7193692d1cb8f76d882cf3a75aa8e0c238"
	I1014 08:47:24.618837   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 kubelet[1622]: I1014 15:46:11.397496    1622 kubelet_node_status.go:72] "Attempting to register node" node="multinode-671000"
	I1014 08:47:24.618837   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 kubelet[1622]: E1014 15:46:11.399409    1622 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.20.106.123:8443: connect: connection refused" node="multinode-671000"
	I1014 08:47:24.618837   15224 command_runner.go:130] > Oct 14 15:46:13 multinode-671000 kubelet[1622]: I1014 15:46:13.001489    1622 kubelet_node_status.go:72] "Attempting to register node" node="multinode-671000"
	I1014 08:47:24.618837   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.316022    1622 kubelet_node_status.go:111] "Node was previously registered" node="multinode-671000"
	I1014 08:47:24.618837   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.316194    1622 kubelet_node_status.go:75] "Successfully registered node" node="multinode-671000"
	I1014 08:47:24.618837   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.316226    1622 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I1014 08:47:24.618837   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.317405    1622 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I1014 08:47:24.618837   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.318741    1622 setters.go:600] "Node became not ready" node="multinode-671000" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-10-14T15:46:15Z","lastTransitionTime":"2024-10-14T15:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I1014 08:47:24.618837   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: E1014 15:46:15.671751    1622 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"etcd-multinode-671000\" already exists" pod="kube-system/etcd-multinode-671000"
	I1014 08:47:24.618837   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.765668    1622 apiserver.go:52] "Watching apiserver"
	I1014 08:47:24.618837   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: E1014 15:46:15.771464    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:24.618837   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: E1014 15:46:15.772813    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:24.618837   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.774456    1622 kubelet.go:1895] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-671000" podUID="80ea37b8-9db1-4a39-9e9e-51c01edadfb1"
	I1014 08:47:24.618837   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.790436    1622 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	I1014 08:47:24.618837   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.804744    1622 kubelet.go:1900] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-multinode-671000"
	I1014 08:47:24.618837   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.875635    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f8d14473-8859-4015-84e9-d00656cc00c9-xtables-lock\") pod \"kube-proxy-r74dx\" (UID: \"f8d14473-8859-4015-84e9-d00656cc00c9\") " pod="kube-system/kube-proxy-r74dx"
	I1014 08:47:24.618837   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.875831    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a508bbf9-7565-4c73-98cb-e9684985c298-xtables-lock\") pod \"kindnet-wqrx6\" (UID: \"a508bbf9-7565-4c73-98cb-e9684985c298\") " pod="kube-system/kindnet-wqrx6"
	I1014 08:47:24.618837   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.876217    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f8d14473-8859-4015-84e9-d00656cc00c9-lib-modules\") pod \"kube-proxy-r74dx\" (UID: \"f8d14473-8859-4015-84e9-d00656cc00c9\") " pod="kube-system/kube-proxy-r74dx"
	I1014 08:47:24.618837   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.876424    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/fde8ff75-bc7f-4db4-b098-c3a08b38d205-tmp\") pod \"storage-provisioner\" (UID: \"fde8ff75-bc7f-4db4-b098-c3a08b38d205\") " pod="kube-system/storage-provisioner"
	I1014 08:47:24.618837   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.876537    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/a508bbf9-7565-4c73-98cb-e9684985c298-cni-cfg\") pod \"kindnet-wqrx6\" (UID: \"a508bbf9-7565-4c73-98cb-e9684985c298\") " pod="kube-system/kindnet-wqrx6"
	I1014 08:47:24.618837   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.876562    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a508bbf9-7565-4c73-98cb-e9684985c298-lib-modules\") pod \"kindnet-wqrx6\" (UID: \"a508bbf9-7565-4c73-98cb-e9684985c298\") " pod="kube-system/kindnet-wqrx6"
	I1014 08:47:24.618837   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: E1014 15:46:15.877886    1622 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I1014 08:47:24.618837   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: E1014 15:46:15.877952    1622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fd736862-9e3e-4a3d-9a86-08efd2338477-config-volume podName:fd736862-9e3e-4a3d-9a86-08efd2338477 nodeName:}" failed. No retries permitted until 2024-10-14 15:46:16.377930736 +0000 UTC m=+6.769202642 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/fd736862-9e3e-4a3d-9a86-08efd2338477-config-volume") pod "coredns-7c65d6cfc9-fs9ct" (UID: "fd736862-9e3e-4a3d-9a86-08efd2338477") : object "kube-system"/"coredns" not registered
	I1014 08:47:24.618837   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.896550    1622 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	I1014 08:47:24.619836   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: E1014 15:46:15.904462    1622 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:24.619836   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: E1014 15:46:15.904557    1622 projected.go:194] Error preparing data for projected volume kube-api-access-46k9l for pod default/busybox-7dff88458-vlp7j: object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:24.619836   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: E1014 15:46:15.904737    1622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/99068807-9f92-42f1-a1a0-fb6e533dc61a-kube-api-access-46k9l podName:99068807-9f92-42f1-a1a0-fb6e533dc61a nodeName:}" failed. No retries permitted until 2024-10-14 15:46:16.404658149 +0000 UTC m=+6.795930055 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-46k9l" (UniqueName: "kubernetes.io/projected/99068807-9f92-42f1-a1a0-fb6e533dc61a-kube-api-access-46k9l") pod "busybox-7dff88458-vlp7j" (UID: "99068807-9f92-42f1-a1a0-fb6e533dc61a") : object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:24.619836   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.919872    1622 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cf38ccc62eb74f6e658e1f66ae8cab1" path="/var/lib/kubelet/pods/3cf38ccc62eb74f6e658e1f66ae8cab1/volumes"
	I1014 08:47:24.619836   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.921055    1622 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-671000" podStartSLOduration=0.921039556 podStartE2EDuration="921.039556ms" podCreationTimestamp="2024-10-14 15:46:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-14 15:46:15.920643156 +0000 UTC m=+6.311915162" watchObservedRunningTime="2024-10-14 15:46:15.921039556 +0000 UTC m=+6.312311562"
	I1014 08:47:24.619836   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.921516    1622 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="778fdb620bffec66f911bf24e3c8210b" path="/var/lib/kubelet/pods/778fdb620bffec66f911bf24e3c8210b/volumes"
	I1014 08:47:24.619836   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 kubelet[1622]: E1014 15:46:16.380142    1622 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I1014 08:47:24.619836   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 kubelet[1622]: E1014 15:46:16.380233    1622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fd736862-9e3e-4a3d-9a86-08efd2338477-config-volume podName:fd736862-9e3e-4a3d-9a86-08efd2338477 nodeName:}" failed. No retries permitted until 2024-10-14 15:46:17.380214172 +0000 UTC m=+7.771486078 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/fd736862-9e3e-4a3d-9a86-08efd2338477-config-volume") pod "coredns-7c65d6cfc9-fs9ct" (UID: "fd736862-9e3e-4a3d-9a86-08efd2338477") : object "kube-system"/"coredns" not registered
	I1014 08:47:24.619836   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 kubelet[1622]: E1014 15:46:16.480798    1622 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:24.619836   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 kubelet[1622]: E1014 15:46:16.480831    1622 projected.go:194] Error preparing data for projected volume kube-api-access-46k9l for pod default/busybox-7dff88458-vlp7j: object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:24.619836   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 kubelet[1622]: E1014 15:46:16.480915    1622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/99068807-9f92-42f1-a1a0-fb6e533dc61a-kube-api-access-46k9l podName:99068807-9f92-42f1-a1a0-fb6e533dc61a nodeName:}" failed. No retries permitted until 2024-10-14 15:46:17.480897019 +0000 UTC m=+7.872168925 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-46k9l" (UniqueName: "kubernetes.io/projected/99068807-9f92-42f1-a1a0-fb6e533dc61a-kube-api-access-46k9l") pod "busybox-7dff88458-vlp7j" (UID: "99068807-9f92-42f1-a1a0-fb6e533dc61a") : object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:24.619836   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 kubelet[1622]: I1014 15:46:16.655226    1622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f8bdf552734e59df94d489a081585cdaeeaee729f0a0ae92105117d4d744f3f"
	I1014 08:47:24.619836   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 kubelet[1622]: I1014 15:46:16.670380    1622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cdcdd532ba1369fe0e866b321f18e0bd99a88a2699c325eea17a070d48f2f024"
	I1014 08:47:24.619836   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 kubelet[1622]: I1014 15:46:16.981444    1622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7bcadf1f0885ff3411b477a64c6af366c77eb264ce7a761f174b079b11e2e703"
	I1014 08:47:24.619836   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 kubelet[1622]: I1014 15:46:16.981500    1622 kubelet.go:1895] "Trying to delete pod" pod="kube-system/etcd-multinode-671000" podUID="56dfdf16-1224-41e3-94de-9d7f4021a17d"
	I1014 08:47:24.619836   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 kubelet[1622]: E1014 15:46:16.982831    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:24.619836   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 kubelet[1622]: I1014 15:46:17.011276    1622 kubelet.go:1900] "Deleted mirror pod because it is outdated" pod="kube-system/etcd-multinode-671000"
	I1014 08:47:24.619836   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 kubelet[1622]: E1014 15:46:17.388224    1622 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I1014 08:47:24.619836   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 kubelet[1622]: E1014 15:46:17.388370    1622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fd736862-9e3e-4a3d-9a86-08efd2338477-config-volume podName:fd736862-9e3e-4a3d-9a86-08efd2338477 nodeName:}" failed. No retries permitted until 2024-10-14 15:46:19.388351245 +0000 UTC m=+9.779623151 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/fd736862-9e3e-4a3d-9a86-08efd2338477-config-volume") pod "coredns-7c65d6cfc9-fs9ct" (UID: "fd736862-9e3e-4a3d-9a86-08efd2338477") : object "kube-system"/"coredns" not registered
	I1014 08:47:24.619836   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 kubelet[1622]: E1014 15:46:17.489591    1622 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:24.619836   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 kubelet[1622]: E1014 15:46:17.489649    1622 projected.go:194] Error preparing data for projected volume kube-api-access-46k9l for pod default/busybox-7dff88458-vlp7j: object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:24.619836   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 kubelet[1622]: E1014 15:46:17.489828    1622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/99068807-9f92-42f1-a1a0-fb6e533dc61a-kube-api-access-46k9l podName:99068807-9f92-42f1-a1a0-fb6e533dc61a nodeName:}" failed. No retries permitted until 2024-10-14 15:46:19.489808492 +0000 UTC m=+9.881080398 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-46k9l" (UniqueName: "kubernetes.io/projected/99068807-9f92-42f1-a1a0-fb6e533dc61a-kube-api-access-46k9l") pod "busybox-7dff88458-vlp7j" (UID: "99068807-9f92-42f1-a1a0-fb6e533dc61a") : object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:24.619836   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 kubelet[1622]: E1014 15:46:17.915482    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:24.619836   15224 command_runner.go:130] > Oct 14 15:46:18 multinode-671000 kubelet[1622]: I1014 15:46:18.163696    1622 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-multinode-671000" podStartSLOduration=1.163677409 podStartE2EDuration="1.163677409s" podCreationTimestamp="2024-10-14 15:46:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-14 15:46:18.133766095 +0000 UTC m=+8.525038101" watchObservedRunningTime="2024-10-14 15:46:18.163677409 +0000 UTC m=+8.554949415"
	I1014 08:47:24.619836   15224 command_runner.go:130] > Oct 14 15:46:18 multinode-671000 kubelet[1622]: E1014 15:46:18.908674    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:24.619836   15224 command_runner.go:130] > Oct 14 15:46:19 multinode-671000 kubelet[1622]: E1014 15:46:19.405477    1622 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I1014 08:47:24.619836   15224 command_runner.go:130] > Oct 14 15:46:19 multinode-671000 kubelet[1622]: E1014 15:46:19.405614    1622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fd736862-9e3e-4a3d-9a86-08efd2338477-config-volume podName:fd736862-9e3e-4a3d-9a86-08efd2338477 nodeName:}" failed. No retries permitted until 2024-10-14 15:46:23.405594191 +0000 UTC m=+13.796866097 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/fd736862-9e3e-4a3d-9a86-08efd2338477-config-volume") pod "coredns-7c65d6cfc9-fs9ct" (UID: "fd736862-9e3e-4a3d-9a86-08efd2338477") : object "kube-system"/"coredns" not registered
	I1014 08:47:24.619836   15224 command_runner.go:130] > Oct 14 15:46:19 multinode-671000 kubelet[1622]: E1014 15:46:19.506858    1622 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:24.620836   15224 command_runner.go:130] > Oct 14 15:46:19 multinode-671000 kubelet[1622]: E1014 15:46:19.507035    1622 projected.go:194] Error preparing data for projected volume kube-api-access-46k9l for pod default/busybox-7dff88458-vlp7j: object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:24.620836   15224 command_runner.go:130] > Oct 14 15:46:19 multinode-671000 kubelet[1622]: E1014 15:46:19.507122    1622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/99068807-9f92-42f1-a1a0-fb6e533dc61a-kube-api-access-46k9l podName:99068807-9f92-42f1-a1a0-fb6e533dc61a nodeName:}" failed. No retries permitted until 2024-10-14 15:46:23.507105839 +0000 UTC m=+13.898377845 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-46k9l" (UniqueName: "kubernetes.io/projected/99068807-9f92-42f1-a1a0-fb6e533dc61a-kube-api-access-46k9l") pod "busybox-7dff88458-vlp7j" (UID: "99068807-9f92-42f1-a1a0-fb6e533dc61a") : object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:24.620836   15224 command_runner.go:130] > Oct 14 15:46:19 multinode-671000 kubelet[1622]: E1014 15:46:19.931507    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:24.620836   15224 command_runner.go:130] > Oct 14 15:46:20 multinode-671000 kubelet[1622]: E1014 15:46:20.907760    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:24.620836   15224 command_runner.go:130] > Oct 14 15:46:21 multinode-671000 kubelet[1622]: E1014 15:46:21.908994    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:24.620836   15224 command_runner.go:130] > Oct 14 15:46:22 multinode-671000 kubelet[1622]: E1014 15:46:22.908657    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:24.620836   15224 command_runner.go:130] > Oct 14 15:46:23 multinode-671000 kubelet[1622]: E1014 15:46:23.462111    1622 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I1014 08:47:24.620836   15224 command_runner.go:130] > Oct 14 15:46:23 multinode-671000 kubelet[1622]: E1014 15:46:23.462203    1622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fd736862-9e3e-4a3d-9a86-08efd2338477-config-volume podName:fd736862-9e3e-4a3d-9a86-08efd2338477 nodeName:}" failed. No retries permitted until 2024-10-14 15:46:31.462185592 +0000 UTC m=+21.853457598 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/fd736862-9e3e-4a3d-9a86-08efd2338477-config-volume") pod "coredns-7c65d6cfc9-fs9ct" (UID: "fd736862-9e3e-4a3d-9a86-08efd2338477") : object "kube-system"/"coredns" not registered
	I1014 08:47:24.620836   15224 command_runner.go:130] > Oct 14 15:46:23 multinode-671000 kubelet[1622]: E1014 15:46:23.562508    1622 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:24.620836   15224 command_runner.go:130] > Oct 14 15:46:23 multinode-671000 kubelet[1622]: E1014 15:46:23.562563    1622 projected.go:194] Error preparing data for projected volume kube-api-access-46k9l for pod default/busybox-7dff88458-vlp7j: object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:24.620836   15224 command_runner.go:130] > Oct 14 15:46:23 multinode-671000 kubelet[1622]: E1014 15:46:23.562768    1622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/99068807-9f92-42f1-a1a0-fb6e533dc61a-kube-api-access-46k9l podName:99068807-9f92-42f1-a1a0-fb6e533dc61a nodeName:}" failed. No retries permitted until 2024-10-14 15:46:31.562650785 +0000 UTC m=+21.953922691 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-46k9l" (UniqueName: "kubernetes.io/projected/99068807-9f92-42f1-a1a0-fb6e533dc61a-kube-api-access-46k9l") pod "busybox-7dff88458-vlp7j" (UID: "99068807-9f92-42f1-a1a0-fb6e533dc61a") : object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:24.620836   15224 command_runner.go:130] > Oct 14 15:46:23 multinode-671000 kubelet[1622]: E1014 15:46:23.910119    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:24.620836   15224 command_runner.go:130] > Oct 14 15:46:24 multinode-671000 kubelet[1622]: E1014 15:46:24.908917    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:24.620836   15224 command_runner.go:130] > Oct 14 15:46:25 multinode-671000 kubelet[1622]: E1014 15:46:25.909505    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:24.620836   15224 command_runner.go:130] > Oct 14 15:46:26 multinode-671000 kubelet[1622]: E1014 15:46:26.907750    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:24.620836   15224 command_runner.go:130] > Oct 14 15:46:27 multinode-671000 kubelet[1622]: E1014 15:46:27.908822    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:24.620836   15224 command_runner.go:130] > Oct 14 15:46:28 multinode-671000 kubelet[1622]: E1014 15:46:28.908219    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:24.620836   15224 command_runner.go:130] > Oct 14 15:46:29 multinode-671000 kubelet[1622]: E1014 15:46:29.910218    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:24.620836   15224 command_runner.go:130] > Oct 14 15:46:30 multinode-671000 kubelet[1622]: E1014 15:46:30.908259    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:24.620836   15224 command_runner.go:130] > Oct 14 15:46:31 multinode-671000 kubelet[1622]: E1014 15:46:31.541520    1622 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I1014 08:47:24.620836   15224 command_runner.go:130] > Oct 14 15:46:31 multinode-671000 kubelet[1622]: E1014 15:46:31.541653    1622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fd736862-9e3e-4a3d-9a86-08efd2338477-config-volume podName:fd736862-9e3e-4a3d-9a86-08efd2338477 nodeName:}" failed. No retries permitted until 2024-10-14 15:46:47.541634578 +0000 UTC m=+37.932906484 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/fd736862-9e3e-4a3d-9a86-08efd2338477-config-volume") pod "coredns-7c65d6cfc9-fs9ct" (UID: "fd736862-9e3e-4a3d-9a86-08efd2338477") : object "kube-system"/"coredns" not registered
	I1014 08:47:24.620836   15224 command_runner.go:130] > Oct 14 15:46:31 multinode-671000 kubelet[1622]: E1014 15:46:31.641930    1622 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:24.620836   15224 command_runner.go:130] > Oct 14 15:46:31 multinode-671000 kubelet[1622]: E1014 15:46:31.641961    1622 projected.go:194] Error preparing data for projected volume kube-api-access-46k9l for pod default/busybox-7dff88458-vlp7j: object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:24.620836   15224 command_runner.go:130] > Oct 14 15:46:31 multinode-671000 kubelet[1622]: E1014 15:46:31.642009    1622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/99068807-9f92-42f1-a1a0-fb6e533dc61a-kube-api-access-46k9l podName:99068807-9f92-42f1-a1a0-fb6e533dc61a nodeName:}" failed. No retries permitted until 2024-10-14 15:46:47.641990935 +0000 UTC m=+38.033262841 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-46k9l" (UniqueName: "kubernetes.io/projected/99068807-9f92-42f1-a1a0-fb6e533dc61a-kube-api-access-46k9l") pod "busybox-7dff88458-vlp7j" (UID: "99068807-9f92-42f1-a1a0-fb6e533dc61a") : object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:24.621836   15224 command_runner.go:130] > Oct 14 15:46:31 multinode-671000 kubelet[1622]: E1014 15:46:31.908383    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:24.621836   15224 command_runner.go:130] > Oct 14 15:46:32 multinode-671000 kubelet[1622]: E1014 15:46:32.908527    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:24.621836   15224 command_runner.go:130] > Oct 14 15:46:33 multinode-671000 kubelet[1622]: E1014 15:46:33.910838    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:24.621836   15224 command_runner.go:130] > Oct 14 15:46:34 multinode-671000 kubelet[1622]: E1014 15:46:34.908180    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:24.621836   15224 command_runner.go:130] > Oct 14 15:46:35 multinode-671000 kubelet[1622]: E1014 15:46:35.908574    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:24.621836   15224 command_runner.go:130] > Oct 14 15:46:36 multinode-671000 kubelet[1622]: E1014 15:46:36.907722    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:24.621836   15224 command_runner.go:130] > Oct 14 15:46:37 multinode-671000 kubelet[1622]: E1014 15:46:37.907861    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:24.621836   15224 command_runner.go:130] > Oct 14 15:46:38 multinode-671000 kubelet[1622]: E1014 15:46:38.908728    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:24.621836   15224 command_runner.go:130] > Oct 14 15:46:39 multinode-671000 kubelet[1622]: E1014 15:46:39.908994    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:24.621836   15224 command_runner.go:130] > Oct 14 15:46:40 multinode-671000 kubelet[1622]: E1014 15:46:40.908676    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:24.621836   15224 command_runner.go:130] > Oct 14 15:46:41 multinode-671000 kubelet[1622]: E1014 15:46:41.909525    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:24.621836   15224 command_runner.go:130] > Oct 14 15:46:42 multinode-671000 kubelet[1622]: E1014 15:46:42.908679    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:24.621836   15224 command_runner.go:130] > Oct 14 15:46:43 multinode-671000 kubelet[1622]: E1014 15:46:43.908615    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:24.621836   15224 command_runner.go:130] > Oct 14 15:46:44 multinode-671000 kubelet[1622]: E1014 15:46:44.908884    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:24.621836   15224 command_runner.go:130] > Oct 14 15:46:45 multinode-671000 kubelet[1622]: E1014 15:46:45.908370    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:24.621836   15224 command_runner.go:130] > Oct 14 15:46:46 multinode-671000 kubelet[1622]: E1014 15:46:46.909263    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:24.621836   15224 command_runner.go:130] > Oct 14 15:46:47 multinode-671000 kubelet[1622]: E1014 15:46:47.573240    1622 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I1014 08:47:24.621836   15224 command_runner.go:130] > Oct 14 15:46:47 multinode-671000 kubelet[1622]: E1014 15:46:47.573353    1622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fd736862-9e3e-4a3d-9a86-08efd2338477-config-volume podName:fd736862-9e3e-4a3d-9a86-08efd2338477 nodeName:}" failed. No retries permitted until 2024-10-14 15:47:19.573334644 +0000 UTC m=+69.964606650 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/fd736862-9e3e-4a3d-9a86-08efd2338477-config-volume") pod "coredns-7c65d6cfc9-fs9ct" (UID: "fd736862-9e3e-4a3d-9a86-08efd2338477") : object "kube-system"/"coredns" not registered
	I1014 08:47:24.622848   15224 command_runner.go:130] > Oct 14 15:46:47 multinode-671000 kubelet[1622]: E1014 15:46:47.673810    1622 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:24.622848   15224 command_runner.go:130] > Oct 14 15:46:47 multinode-671000 kubelet[1622]: E1014 15:46:47.673907    1622 projected.go:194] Error preparing data for projected volume kube-api-access-46k9l for pod default/busybox-7dff88458-vlp7j: object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:24.622848   15224 command_runner.go:130] > Oct 14 15:46:47 multinode-671000 kubelet[1622]: E1014 15:46:47.674014    1622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/99068807-9f92-42f1-a1a0-fb6e533dc61a-kube-api-access-46k9l podName:99068807-9f92-42f1-a1a0-fb6e533dc61a nodeName:}" failed. No retries permitted until 2024-10-14 15:47:19.673994259 +0000 UTC m=+70.065266165 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-46k9l" (UniqueName: "kubernetes.io/projected/99068807-9f92-42f1-a1a0-fb6e533dc61a-kube-api-access-46k9l") pod "busybox-7dff88458-vlp7j" (UID: "99068807-9f92-42f1-a1a0-fb6e533dc61a") : object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:24.622848   15224 command_runner.go:130] > Oct 14 15:46:47 multinode-671000 kubelet[1622]: E1014 15:46:47.908883    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:24.622848   15224 command_runner.go:130] > Oct 14 15:46:48 multinode-671000 kubelet[1622]: I1014 15:46:48.486803    1622 scope.go:117] "RemoveContainer" containerID="3d8b7bae48a59c755a1ffda14e7fdd0c2302b394db67b7de21fd5b819dad243b"
	I1014 08:47:24.622848   15224 command_runner.go:130] > Oct 14 15:46:48 multinode-671000 kubelet[1622]: I1014 15:46:48.487259    1622 scope.go:117] "RemoveContainer" containerID="c76c2585681071ddc096fbc042a644b58d0b7d5d604107c6906a076c7aa1cca6"
	I1014 08:47:24.622848   15224 command_runner.go:130] > Oct 14 15:46:48 multinode-671000 kubelet[1622]: E1014 15:46:48.487448    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(fde8ff75-bc7f-4db4-b098-c3a08b38d205)\"" pod="kube-system/storage-provisioner" podUID="fde8ff75-bc7f-4db4-b098-c3a08b38d205"
	I1014 08:47:24.622848   15224 command_runner.go:130] > Oct 14 15:46:48 multinode-671000 kubelet[1622]: E1014 15:46:48.908732    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:24.622848   15224 command_runner.go:130] > Oct 14 15:46:49 multinode-671000 kubelet[1622]: E1014 15:46:49.908877    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:24.622848   15224 command_runner.go:130] > Oct 14 15:46:50 multinode-671000 kubelet[1622]: E1014 15:46:50.907718    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:24.622848   15224 command_runner.go:130] > Oct 14 15:46:51 multinode-671000 kubelet[1622]: E1014 15:46:51.909552    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:24.622848   15224 command_runner.go:130] > Oct 14 15:46:52 multinode-671000 kubelet[1622]: E1014 15:46:52.908818    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:24.622848   15224 command_runner.go:130] > Oct 14 15:46:53 multinode-671000 kubelet[1622]: E1014 15:46:53.908389    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:24.622848   15224 command_runner.go:130] > Oct 14 15:46:54 multinode-671000 kubelet[1622]: E1014 15:46:54.908089    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:24.622848   15224 command_runner.go:130] > Oct 14 15:46:55 multinode-671000 kubelet[1622]: E1014 15:46:55.908582    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:24.622848   15224 command_runner.go:130] > Oct 14 15:46:56 multinode-671000 kubelet[1622]: E1014 15:46:56.908839    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:24.622848   15224 command_runner.go:130] > Oct 14 15:46:57 multinode-671000 kubelet[1622]: E1014 15:46:57.909489    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:24.622848   15224 command_runner.go:130] > Oct 14 15:46:58 multinode-671000 kubelet[1622]: E1014 15:46:58.908804    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:24.622848   15224 command_runner.go:130] > Oct 14 15:46:59 multinode-671000 kubelet[1622]: I1014 15:46:59.853068    1622 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
	I1014 08:47:24.622848   15224 command_runner.go:130] > Oct 14 15:47:02 multinode-671000 kubelet[1622]: I1014 15:47:02.908981    1622 scope.go:117] "RemoveContainer" containerID="c76c2585681071ddc096fbc042a644b58d0b7d5d604107c6906a076c7aa1cca6"
	I1014 08:47:24.623844   15224 command_runner.go:130] > Oct 14 15:47:09 multinode-671000 kubelet[1622]: I1014 15:47:09.901385    1622 scope.go:117] "RemoveContainer" containerID="0b5a6e440d7b67606ed0a4dfa4d07715b1fd7e6f53bc0b8779f86a33c5baf6e9"
	I1014 08:47:24.623844   15224 command_runner.go:130] > Oct 14 15:47:09 multinode-671000 kubelet[1622]: I1014 15:47:09.946936    1622 scope.go:117] "RemoveContainer" containerID="1ba3cd8bbd5963097f4d674fc98eca21e1a710f5a150a067747aa4e6c922d2fe"
	I1014 08:47:24.623844   15224 command_runner.go:130] > Oct 14 15:47:09 multinode-671000 kubelet[1622]: E1014 15:47:09.949713    1622 iptables.go:577] "Could not set up iptables canary" err=<
	I1014 08:47:24.623844   15224 command_runner.go:130] > Oct 14 15:47:09 multinode-671000 kubelet[1622]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I1014 08:47:24.623844   15224 command_runner.go:130] > Oct 14 15:47:09 multinode-671000 kubelet[1622]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I1014 08:47:24.623844   15224 command_runner.go:130] > Oct 14 15:47:09 multinode-671000 kubelet[1622]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I1014 08:47:24.623844   15224 command_runner.go:130] > Oct 14 15:47:09 multinode-671000 kubelet[1622]:  > table="nat" chain="KUBE-KUBELET-CANARY"
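A note on the retry cadence in the kubelet excerpt above: each failed MountVolume.SetUp is rescheduled with a doubling delay (durationBeforeRetry 8s, then 16s, then 32s), i.e. a capped exponential backoff, so the volume errors keep repeating until the "kube-root-ca.crt" and "coredns" objects are re-registered. A minimal Go sketch of that doubling pattern follows; it is illustrative only, not the kubelet's actual nestedpendingoperations code, and the 2m cap is an assumption.

```go
package main

import (
	"fmt"
	"time"
)

// nextDelay doubles the retry delay up to maxDelay, mirroring the
// 8s -> 16s -> 32s progression in the durationBeforeRetry fields above.
func nextDelay(cur, maxDelay time.Duration) time.Duration {
	if next := cur * 2; next <= maxDelay {
		return next
	}
	return maxDelay
}

func main() {
	d := 8 * time.Second // first delay observed in the log
	for attempt := 1; attempt <= 4; attempt++ {
		fmt.Printf("attempt %d: retry in %v\n", attempt, d)
		d = nextDelay(d, 2*time.Minute) // cap value is an assumption
	}
}
```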
	I1014 08:47:24.665830   15224 logs.go:123] Gathering logs for describe nodes ...
	I1014 08:47:24.665830   15224 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 08:47:24.938923   15224 command_runner.go:130] > Name:               multinode-671000
	I1014 08:47:24.938923   15224 command_runner.go:130] > Roles:              control-plane
	I1014 08:47:24.938923   15224 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I1014 08:47:24.938923   15224 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I1014 08:47:24.938923   15224 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I1014 08:47:24.938923   15224 command_runner.go:130] >                     kubernetes.io/hostname=multinode-671000
	I1014 08:47:24.938923   15224 command_runner.go:130] >                     kubernetes.io/os=linux
	I1014 08:47:24.938923   15224 command_runner.go:130] >                     minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b
	I1014 08:47:24.938923   15224 command_runner.go:130] >                     minikube.k8s.io/name=multinode-671000
	I1014 08:47:24.938923   15224 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I1014 08:47:24.938923   15224 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_10_14T08_22_41_0700
	I1014 08:47:24.938923   15224 command_runner.go:130] >                     minikube.k8s.io/version=v1.34.0
	I1014 08:47:24.938923   15224 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I1014 08:47:24.938923   15224 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I1014 08:47:24.938923   15224 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I1014 08:47:24.938923   15224 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I1014 08:47:24.938923   15224 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I1014 08:47:24.938923   15224 command_runner.go:130] > CreationTimestamp:  Mon, 14 Oct 2024 15:22:36 +0000
	I1014 08:47:24.938923   15224 command_runner.go:130] > Taints:             <none>
	I1014 08:47:24.938923   15224 command_runner.go:130] > Unschedulable:      false
	I1014 08:47:24.938923   15224 command_runner.go:130] > Lease:
	I1014 08:47:24.938923   15224 command_runner.go:130] >   HolderIdentity:  multinode-671000
	I1014 08:47:24.938923   15224 command_runner.go:130] >   AcquireTime:     <unset>
	I1014 08:47:24.938923   15224 command_runner.go:130] >   RenewTime:       Mon, 14 Oct 2024 15:47:16 +0000
	I1014 08:47:24.938923   15224 command_runner.go:130] > Conditions:
	I1014 08:47:24.938923   15224 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I1014 08:47:24.938923   15224 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I1014 08:47:24.938923   15224 command_runner.go:130] >   MemoryPressure   False   Mon, 14 Oct 2024 15:46:59 +0000   Mon, 14 Oct 2024 15:22:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I1014 08:47:24.938923   15224 command_runner.go:130] >   DiskPressure     False   Mon, 14 Oct 2024 15:46:59 +0000   Mon, 14 Oct 2024 15:22:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I1014 08:47:24.938923   15224 command_runner.go:130] >   PIDPressure      False   Mon, 14 Oct 2024 15:46:59 +0000   Mon, 14 Oct 2024 15:22:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I1014 08:47:24.938923   15224 command_runner.go:130] >   Ready            True    Mon, 14 Oct 2024 15:46:59 +0000   Mon, 14 Oct 2024 15:46:59 +0000   KubeletReady                 kubelet is posting ready status
	I1014 08:47:24.938923   15224 command_runner.go:130] > Addresses:
	I1014 08:47:24.938923   15224 command_runner.go:130] >   InternalIP:  172.20.106.123
	I1014 08:47:24.938923   15224 command_runner.go:130] >   Hostname:    multinode-671000
	I1014 08:47:24.938923   15224 command_runner.go:130] > Capacity:
	I1014 08:47:24.938923   15224 command_runner.go:130] >   cpu:                2
	I1014 08:47:24.938923   15224 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I1014 08:47:24.938923   15224 command_runner.go:130] >   hugepages-2Mi:      0
	I1014 08:47:24.938923   15224 command_runner.go:130] >   memory:             2164264Ki
	I1014 08:47:24.938923   15224 command_runner.go:130] >   pods:               110
	I1014 08:47:24.938923   15224 command_runner.go:130] > Allocatable:
	I1014 08:47:24.938923   15224 command_runner.go:130] >   cpu:                2
	I1014 08:47:24.938923   15224 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I1014 08:47:24.938923   15224 command_runner.go:130] >   hugepages-2Mi:      0
	I1014 08:47:24.938923   15224 command_runner.go:130] >   memory:             2164264Ki
	I1014 08:47:24.938923   15224 command_runner.go:130] >   pods:               110
	I1014 08:47:24.938923   15224 command_runner.go:130] > System Info:
	I1014 08:47:24.938923   15224 command_runner.go:130] >   Machine ID:                 fc389f3b9e2846b4b909cfc8e7984541
	I1014 08:47:24.938923   15224 command_runner.go:130] >   System UUID:                f72bc210-2ad8-1e4f-ad7d-3e75159c3d98
	I1014 08:47:24.938923   15224 command_runner.go:130] >   Boot ID:                    98d09a99-1eff-402d-837f-6cacdc4463d7
	I1014 08:47:24.938923   15224 command_runner.go:130] >   Kernel Version:             5.10.207
	I1014 08:47:24.938923   15224 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I1014 08:47:24.938923   15224 command_runner.go:130] >   Operating System:           linux
	I1014 08:47:24.938923   15224 command_runner.go:130] >   Architecture:               amd64
	I1014 08:47:24.938923   15224 command_runner.go:130] >   Container Runtime Version:  docker://27.3.1
	I1014 08:47:24.938923   15224 command_runner.go:130] >   Kubelet Version:            v1.31.1
	I1014 08:47:24.938923   15224 command_runner.go:130] >   Kube-Proxy Version:         v1.31.1
	I1014 08:47:24.938923   15224 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I1014 08:47:24.938923   15224 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I1014 08:47:24.938923   15224 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I1014 08:47:24.938923   15224 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I1014 08:47:24.938923   15224 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I1014 08:47:24.938923   15224 command_runner.go:130] >   default                     busybox-7dff88458-vlp7j                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I1014 08:47:24.938923   15224 command_runner.go:130] >   kube-system                 coredns-7c65d6cfc9-fs9ct                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     24m
	I1014 08:47:24.938923   15224 command_runner.go:130] >   kube-system                 etcd-multinode-671000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         67s
	I1014 08:47:24.938923   15224 command_runner.go:130] >   kube-system                 kindnet-wqrx6                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      24m
	I1014 08:47:24.938923   15224 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-671000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         69s
	I1014 08:47:24.939921   15224 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-671000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         24m
	I1014 08:47:24.939921   15224 command_runner.go:130] >   kube-system                 kube-proxy-r74dx                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	I1014 08:47:24.939921   15224 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-671000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24m
	I1014 08:47:24.939921   15224 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	I1014 08:47:24.939921   15224 command_runner.go:130] > Allocated resources:
	I1014 08:47:24.939921   15224 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I1014 08:47:24.939921   15224 command_runner.go:130] >   Resource           Requests     Limits
	I1014 08:47:24.939921   15224 command_runner.go:130] >   --------           --------     ------
	I1014 08:47:24.939921   15224 command_runner.go:130] >   cpu                850m (42%)   100m (5%)
	I1014 08:47:24.939921   15224 command_runner.go:130] >   memory             220Mi (10%)  220Mi (10%)
	I1014 08:47:24.939921   15224 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I1014 08:47:24.939921   15224 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I1014 08:47:24.939921   15224 command_runner.go:130] > Events:
	I1014 08:47:24.939921   15224 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I1014 08:47:24.939921   15224 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I1014 08:47:24.939921   15224 command_runner.go:130] >   Normal  Starting                 24m                kube-proxy       
	I1014 08:47:24.939921   15224 command_runner.go:130] >   Normal  Starting                 66s                kube-proxy       
	I1014 08:47:24.939921   15224 command_runner.go:130] >   Normal  Starting                 24m                kubelet          Starting kubelet.
	I1014 08:47:24.939921   15224 command_runner.go:130] >   Normal  NodeHasSufficientMemory  24m (x8 over 24m)  kubelet          Node multinode-671000 status is now: NodeHasSufficientMemory
	I1014 08:47:24.939921   15224 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    24m (x8 over 24m)  kubelet          Node multinode-671000 status is now: NodeHasNoDiskPressure
	I1014 08:47:24.939921   15224 command_runner.go:130] >   Normal  NodeHasSufficientPID     24m (x7 over 24m)  kubelet          Node multinode-671000 status is now: NodeHasSufficientPID
	I1014 08:47:24.939921   15224 command_runner.go:130] >   Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	I1014 08:47:24.939921   15224 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    24m                kubelet          Node multinode-671000 status is now: NodeHasNoDiskPressure
	I1014 08:47:24.939921   15224 command_runner.go:130] >   Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	I1014 08:47:24.939921   15224 command_runner.go:130] >   Normal  NodeHasSufficientMemory  24m                kubelet          Node multinode-671000 status is now: NodeHasSufficientMemory
	I1014 08:47:24.939921   15224 command_runner.go:130] >   Normal  NodeHasSufficientPID     24m                kubelet          Node multinode-671000 status is now: NodeHasSufficientPID
	I1014 08:47:24.939921   15224 command_runner.go:130] >   Normal  Starting                 24m                kubelet          Starting kubelet.
	I1014 08:47:24.939921   15224 command_runner.go:130] >   Normal  RegisteredNode           24m                node-controller  Node multinode-671000 event: Registered Node multinode-671000 in Controller
	I1014 08:47:24.939921   15224 command_runner.go:130] >   Normal  NodeReady                24m                kubelet          Node multinode-671000 status is now: NodeReady
	I1014 08:47:24.939921   15224 command_runner.go:130] >   Normal  Starting                 75s                kubelet          Starting kubelet.
	I1014 08:47:24.939921   15224 command_runner.go:130] >   Normal  NodeAllocatableEnforced  75s                kubelet          Updated Node Allocatable limit across pods
	I1014 08:47:24.939921   15224 command_runner.go:130] >   Normal  NodeHasSufficientMemory  74s (x8 over 75s)  kubelet          Node multinode-671000 status is now: NodeHasSufficientMemory
	I1014 08:47:24.939921   15224 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    74s (x8 over 75s)  kubelet          Node multinode-671000 status is now: NodeHasNoDiskPressure
	I1014 08:47:24.939921   15224 command_runner.go:130] >   Normal  NodeHasSufficientPID     74s (x7 over 75s)  kubelet          Node multinode-671000 status is now: NodeHasSufficientPID
	I1014 08:47:24.939921   15224 command_runner.go:130] >   Normal  RegisteredNode           66s                node-controller  Node multinode-671000 event: Registered Node multinode-671000 in Controller
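For reference, the percentages in the Allocated resources block above are just the requests and limits divided by the node's Allocatable values (2 CPUs, 2164264Ki memory), truncated to whole percent. A quick Go check against the control-plane figures (the integer truncation is an assumption that happens to match the printed output):

```go
package main

import "fmt"

func main() {
	// Control-plane node: 850m CPU requested of 2000m allocatable,
	// 220Mi memory requested of 2164264Ki allocatable.
	fmt.Println(850*100/2000, "% cpu")            // prints 42
	fmt.Println(220*1024*100/2164264, "% memory") // prints 10
}
```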
	I1014 08:47:24.939921   15224 command_runner.go:130] > Name:               multinode-671000-m02
	I1014 08:47:24.939921   15224 command_runner.go:130] > Roles:              <none>
	I1014 08:47:24.939921   15224 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I1014 08:47:24.939921   15224 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I1014 08:47:24.939921   15224 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I1014 08:47:24.939921   15224 command_runner.go:130] >                     kubernetes.io/hostname=multinode-671000-m02
	I1014 08:47:24.939921   15224 command_runner.go:130] >                     kubernetes.io/os=linux
	I1014 08:47:24.939921   15224 command_runner.go:130] >                     minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b
	I1014 08:47:24.939921   15224 command_runner.go:130] >                     minikube.k8s.io/name=multinode-671000
	I1014 08:47:24.939921   15224 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I1014 08:47:24.939921   15224 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_10_14T08_25_50_0700
	I1014 08:47:24.939921   15224 command_runner.go:130] >                     minikube.k8s.io/version=v1.34.0
	I1014 08:47:24.939921   15224 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I1014 08:47:24.939921   15224 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I1014 08:47:24.939921   15224 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I1014 08:47:24.939921   15224 command_runner.go:130] > CreationTimestamp:  Mon, 14 Oct 2024 15:25:49 +0000
	I1014 08:47:24.939921   15224 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I1014 08:47:24.939921   15224 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I1014 08:47:24.939921   15224 command_runner.go:130] > Unschedulable:      false
	I1014 08:47:24.939921   15224 command_runner.go:130] > Lease:
	I1014 08:47:24.939921   15224 command_runner.go:130] >   HolderIdentity:  multinode-671000-m02
	I1014 08:47:24.939921   15224 command_runner.go:130] >   AcquireTime:     <unset>
	I1014 08:47:24.939921   15224 command_runner.go:130] >   RenewTime:       Mon, 14 Oct 2024 15:43:00 +0000
	I1014 08:47:24.939921   15224 command_runner.go:130] > Conditions:
	I1014 08:47:24.939921   15224 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I1014 08:47:24.939921   15224 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I1014 08:47:24.939921   15224 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 14 Oct 2024 15:42:08 +0000   Mon, 14 Oct 2024 15:43:45 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I1014 08:47:24.939921   15224 command_runner.go:130] >   DiskPressure     Unknown   Mon, 14 Oct 2024 15:42:08 +0000   Mon, 14 Oct 2024 15:43:45 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I1014 08:47:24.939921   15224 command_runner.go:130] >   PIDPressure      Unknown   Mon, 14 Oct 2024 15:42:08 +0000   Mon, 14 Oct 2024 15:43:45 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I1014 08:47:24.939921   15224 command_runner.go:130] >   Ready            Unknown   Mon, 14 Oct 2024 15:42:08 +0000   Mon, 14 Oct 2024 15:43:45 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I1014 08:47:24.939921   15224 command_runner.go:130] > Addresses:
	I1014 08:47:24.939921   15224 command_runner.go:130] >   InternalIP:  172.20.109.137
	I1014 08:47:24.939921   15224 command_runner.go:130] >   Hostname:    multinode-671000-m02
	I1014 08:47:24.939921   15224 command_runner.go:130] > Capacity:
	I1014 08:47:24.939921   15224 command_runner.go:130] >   cpu:                2
	I1014 08:47:24.939921   15224 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I1014 08:47:24.939921   15224 command_runner.go:130] >   hugepages-2Mi:      0
	I1014 08:47:24.939921   15224 command_runner.go:130] >   memory:             2164264Ki
	I1014 08:47:24.939921   15224 command_runner.go:130] >   pods:               110
	I1014 08:47:24.939921   15224 command_runner.go:130] > Allocatable:
	I1014 08:47:24.939921   15224 command_runner.go:130] >   cpu:                2
	I1014 08:47:24.939921   15224 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I1014 08:47:24.939921   15224 command_runner.go:130] >   hugepages-2Mi:      0
	I1014 08:47:24.940918   15224 command_runner.go:130] >   memory:             2164264Ki
	I1014 08:47:24.940918   15224 command_runner.go:130] >   pods:               110
	I1014 08:47:24.940918   15224 command_runner.go:130] > System Info:
	I1014 08:47:24.940918   15224 command_runner.go:130] >   Machine ID:                 d57a3470ff3c4f03abf025a19c5c23d9
	I1014 08:47:24.940918   15224 command_runner.go:130] >   System UUID:                d36e677f-0d87-a94f-af28-d8324326f88f
	I1014 08:47:24.940918   15224 command_runner.go:130] >   Boot ID:                    33649cab-156e-4789-8adf-a6fc060b3de7
	I1014 08:47:24.940918   15224 command_runner.go:130] >   Kernel Version:             5.10.207
	I1014 08:47:24.940918   15224 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I1014 08:47:24.940918   15224 command_runner.go:130] >   Operating System:           linux
	I1014 08:47:24.940918   15224 command_runner.go:130] >   Architecture:               amd64
	I1014 08:47:24.940918   15224 command_runner.go:130] >   Container Runtime Version:  docker://27.3.1
	I1014 08:47:24.940918   15224 command_runner.go:130] >   Kubelet Version:            v1.31.1
	I1014 08:47:24.940918   15224 command_runner.go:130] >   Kube-Proxy Version:         v1.31.1
	I1014 08:47:24.940918   15224 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I1014 08:47:24.940918   15224 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I1014 08:47:24.940918   15224 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I1014 08:47:24.940918   15224 command_runner.go:130] >   Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I1014 08:47:24.940918   15224 command_runner.go:130] >   ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	I1014 08:47:24.940918   15224 command_runner.go:130] >   default                     busybox-7dff88458-bnqj6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I1014 08:47:24.940918   15224 command_runner.go:130] >   kube-system                 kindnet-rgbjf              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      21m
	I1014 08:47:24.940918   15224 command_runner.go:130] >   kube-system                 kube-proxy-kbpjf           0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	I1014 08:47:24.940918   15224 command_runner.go:130] > Allocated resources:
	I1014 08:47:24.940918   15224 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I1014 08:47:24.940918   15224 command_runner.go:130] >   Resource           Requests   Limits
	I1014 08:47:24.940918   15224 command_runner.go:130] >   --------           --------   ------
	I1014 08:47:24.940918   15224 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I1014 08:47:24.940918   15224 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I1014 08:47:24.940918   15224 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I1014 08:47:24.940918   15224 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I1014 08:47:24.940918   15224 command_runner.go:130] > Events:
	I1014 08:47:24.940918   15224 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I1014 08:47:24.940918   15224 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I1014 08:47:24.940918   15224 command_runner.go:130] >   Normal  Starting                 21m                kube-proxy       
	I1014 08:47:24.940918   15224 command_runner.go:130] >   Normal  NodeHasSufficientMemory  21m (x2 over 21m)  kubelet          Node multinode-671000-m02 status is now: NodeHasSufficientMemory
	I1014 08:47:24.940918   15224 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    21m (x2 over 21m)  kubelet          Node multinode-671000-m02 status is now: NodeHasNoDiskPressure
	I1014 08:47:24.940918   15224 command_runner.go:130] >   Normal  NodeHasSufficientPID     21m (x2 over 21m)  kubelet          Node multinode-671000-m02 status is now: NodeHasSufficientPID
	I1014 08:47:24.940918   15224 command_runner.go:130] >   Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	I1014 08:47:24.940918   15224 command_runner.go:130] >   Normal  RegisteredNode           21m                node-controller  Node multinode-671000-m02 event: Registered Node multinode-671000-m02 in Controller
	I1014 08:47:24.940918   15224 command_runner.go:130] >   Normal  NodeReady                21m                kubelet          Node multinode-671000-m02 status is now: NodeReady
	I1014 08:47:24.940918   15224 command_runner.go:130] >   Normal  NodeNotReady             3m39s              node-controller  Node multinode-671000-m02 status is now: NodeNotReady
	I1014 08:47:24.940918   15224 command_runner.go:130] >   Normal  RegisteredNode           66s                node-controller  Node multinode-671000-m02 event: Registered Node multinode-671000-m02 in Controller
	I1014 08:47:24.940918   15224 command_runner.go:130] > Name:               multinode-671000-m03
	I1014 08:47:24.940918   15224 command_runner.go:130] > Roles:              <none>
	I1014 08:47:24.940918   15224 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I1014 08:47:24.940918   15224 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I1014 08:47:24.940918   15224 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I1014 08:47:24.940918   15224 command_runner.go:130] >                     kubernetes.io/hostname=multinode-671000-m03
	I1014 08:47:24.940918   15224 command_runner.go:130] >                     kubernetes.io/os=linux
	I1014 08:47:24.940918   15224 command_runner.go:130] >                     minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b
	I1014 08:47:24.940918   15224 command_runner.go:130] >                     minikube.k8s.io/name=multinode-671000
	I1014 08:47:24.940918   15224 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I1014 08:47:24.940918   15224 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_10_14T08_41_35_0700
	I1014 08:47:24.940918   15224 command_runner.go:130] >                     minikube.k8s.io/version=v1.34.0
	I1014 08:47:24.940918   15224 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I1014 08:47:24.940918   15224 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I1014 08:47:24.940918   15224 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I1014 08:47:24.940918   15224 command_runner.go:130] > CreationTimestamp:  Mon, 14 Oct 2024 15:41:34 +0000
	I1014 08:47:24.940918   15224 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I1014 08:47:24.940918   15224 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I1014 08:47:24.940918   15224 command_runner.go:130] > Unschedulable:      false
	I1014 08:47:24.940918   15224 command_runner.go:130] > Lease:
	I1014 08:47:24.940918   15224 command_runner.go:130] >   HolderIdentity:  multinode-671000-m03
	I1014 08:47:24.940918   15224 command_runner.go:130] >   AcquireTime:     <unset>
	I1014 08:47:24.940918   15224 command_runner.go:130] >   RenewTime:       Mon, 14 Oct 2024 15:42:46 +0000
	I1014 08:47:24.940918   15224 command_runner.go:130] > Conditions:
	I1014 08:47:24.940918   15224 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I1014 08:47:24.940918   15224 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I1014 08:47:24.940918   15224 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 14 Oct 2024 15:41:53 +0000   Mon, 14 Oct 2024 15:43:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I1014 08:47:24.940918   15224 command_runner.go:130] >   DiskPressure     Unknown   Mon, 14 Oct 2024 15:41:53 +0000   Mon, 14 Oct 2024 15:43:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I1014 08:47:24.940918   15224 command_runner.go:130] >   PIDPressure      Unknown   Mon, 14 Oct 2024 15:41:53 +0000   Mon, 14 Oct 2024 15:43:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I1014 08:47:24.940918   15224 command_runner.go:130] >   Ready            Unknown   Mon, 14 Oct 2024 15:41:53 +0000   Mon, 14 Oct 2024 15:43:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I1014 08:47:24.940918   15224 command_runner.go:130] > Addresses:
	I1014 08:47:24.940918   15224 command_runner.go:130] >   InternalIP:  172.20.102.29
	I1014 08:47:24.940918   15224 command_runner.go:130] >   Hostname:    multinode-671000-m03
	I1014 08:47:24.940918   15224 command_runner.go:130] > Capacity:
	I1014 08:47:24.940918   15224 command_runner.go:130] >   cpu:                2
	I1014 08:47:24.941933   15224 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I1014 08:47:24.941933   15224 command_runner.go:130] >   hugepages-2Mi:      0
	I1014 08:47:24.941933   15224 command_runner.go:130] >   memory:             2164264Ki
	I1014 08:47:24.941933   15224 command_runner.go:130] >   pods:               110
	I1014 08:47:24.941933   15224 command_runner.go:130] > Allocatable:
	I1014 08:47:24.941933   15224 command_runner.go:130] >   cpu:                2
	I1014 08:47:24.941933   15224 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I1014 08:47:24.941933   15224 command_runner.go:130] >   hugepages-2Mi:      0
	I1014 08:47:24.941933   15224 command_runner.go:130] >   memory:             2164264Ki
	I1014 08:47:24.941933   15224 command_runner.go:130] >   pods:               110
	I1014 08:47:24.941933   15224 command_runner.go:130] > System Info:
	I1014 08:47:24.941933   15224 command_runner.go:130] >   Machine ID:                 6da8cf5e96c04d55b9129d0893534bf2
	I1014 08:47:24.941933   15224 command_runner.go:130] >   System UUID:                49616488-815a-3f43-8f47-13dbf29b6ca7
	I1014 08:47:24.941933   15224 command_runner.go:130] >   Boot ID:                    d9fe58fb-ac8e-4430-9563-1b3e9fd35ffd
	I1014 08:47:24.941933   15224 command_runner.go:130] >   Kernel Version:             5.10.207
	I1014 08:47:24.941933   15224 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I1014 08:47:24.941933   15224 command_runner.go:130] >   Operating System:           linux
	I1014 08:47:24.941933   15224 command_runner.go:130] >   Architecture:               amd64
	I1014 08:47:24.941933   15224 command_runner.go:130] >   Container Runtime Version:  docker://27.3.1
	I1014 08:47:24.941933   15224 command_runner.go:130] >   Kubelet Version:            v1.31.1
	I1014 08:47:24.941933   15224 command_runner.go:130] >   Kube-Proxy Version:         v1.31.1
	I1014 08:47:24.941933   15224 command_runner.go:130] > PodCIDR:                      10.244.3.0/24
	I1014 08:47:24.941933   15224 command_runner.go:130] > PodCIDRs:                     10.244.3.0/24
	I1014 08:47:24.941933   15224 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I1014 08:47:24.941933   15224 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I1014 08:47:24.941933   15224 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I1014 08:47:24.941933   15224 command_runner.go:130] >   kube-system                 kindnet-5rqxq       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	I1014 08:47:24.941933   15224 command_runner.go:130] >   kube-system                 kube-proxy-n6txs    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	I1014 08:47:24.941933   15224 command_runner.go:130] > Allocated resources:
	I1014 08:47:24.941933   15224 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I1014 08:47:24.941933   15224 command_runner.go:130] >   Resource           Requests   Limits
	I1014 08:47:24.941933   15224 command_runner.go:130] >   --------           --------   ------
	I1014 08:47:24.941933   15224 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I1014 08:47:24.941933   15224 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I1014 08:47:24.941933   15224 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I1014 08:47:24.941933   15224 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I1014 08:47:24.941933   15224 command_runner.go:130] > Events:
	I1014 08:47:24.941933   15224 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I1014 08:47:24.941933   15224 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I1014 08:47:24.941933   15224 command_runner.go:130] >   Normal  Starting                 5m46s                  kube-proxy       
	I1014 08:47:24.941933   15224 command_runner.go:130] >   Normal  Starting                 16m                    kube-proxy       
	I1014 08:47:24.941933   15224 command_runner.go:130] >   Normal  NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	I1014 08:47:24.941933   15224 command_runner.go:130] >   Normal  NodeHasSufficientMemory  16m (x2 over 16m)      kubelet          Node multinode-671000-m03 status is now: NodeHasSufficientMemory
	I1014 08:47:24.941933   15224 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    16m (x2 over 16m)      kubelet          Node multinode-671000-m03 status is now: NodeHasNoDiskPressure
	I1014 08:47:24.941933   15224 command_runner.go:130] >   Normal  NodeHasSufficientPID     16m (x2 over 16m)      kubelet          Node multinode-671000-m03 status is now: NodeHasSufficientPID
	I1014 08:47:24.941933   15224 command_runner.go:130] >   Normal  NodeReady                16m                    kubelet          Node multinode-671000-m03 status is now: NodeReady
	I1014 08:47:24.941933   15224 command_runner.go:130] >   Normal  Starting                 5m50s                  kubelet          Starting kubelet.
	I1014 08:47:24.941933   15224 command_runner.go:130] >   Normal  NodeHasSufficientMemory  5m50s (x2 over 5m50s)  kubelet          Node multinode-671000-m03 status is now: NodeHasSufficientMemory
	I1014 08:47:24.941933   15224 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    5m50s (x2 over 5m50s)  kubelet          Node multinode-671000-m03 status is now: NodeHasNoDiskPressure
	I1014 08:47:24.941933   15224 command_runner.go:130] >   Normal  NodeHasSufficientPID     5m50s (x2 over 5m50s)  kubelet          Node multinode-671000-m03 status is now: NodeHasSufficientPID
	I1014 08:47:24.941933   15224 command_runner.go:130] >   Normal  NodeAllocatableEnforced  5m50s                  kubelet          Updated Node Allocatable limit across pods
	I1014 08:47:24.941933   15224 command_runner.go:130] >   Normal  RegisteredNode           5m45s                  node-controller  Node multinode-671000-m03 event: Registered Node multinode-671000-m03 in Controller
	I1014 08:47:24.941933   15224 command_runner.go:130] >   Normal  NodeReady                5m31s                  kubelet          Node multinode-671000-m03 status is now: NodeReady
	I1014 08:47:24.941933   15224 command_runner.go:130] >   Normal  NodeNotReady             3m55s                  node-controller  Node multinode-671000-m03 status is now: NodeNotReady
	I1014 08:47:24.941933   15224 command_runner.go:130] >   Normal  RegisteredNode           66s                    node-controller  Node multinode-671000-m03 event: Registered Node multinode-671000-m03 in Controller
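The Unknown conditions on multinode-671000-m02 and multinode-671000-m03 ("Kubelet stopped posting node status") are what prompt the node controller to apply the node.kubernetes.io/unreachable NoSchedule/NoExecute taints shown in both descriptions, while the control-plane node flipped back to Ready at 15:46:59. A hypothetical client-go snippet, not part of this test harness and assuming a kubeconfig at the default location, that reads the same Ready condition the describe output reports:

```go
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: kubeconfig at the default ~/.kube/config location.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, n := range nodes.Items {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				// Matches the Ready row in `kubectl describe nodes`:
				// True, False, or Unknown once the kubelet stops reporting.
				fmt.Printf("%s Ready=%s (%s)\n", n.Name, c.Status, c.Reason)
			}
		}
	}
}
```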
	I1014 08:47:24.952935   15224 logs.go:123] Gathering logs for kube-scheduler [661e75bbf6b4] ...
	I1014 08:47:24.952935   15224 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 661e75bbf6b4"
	I1014 08:47:24.987868   15224 command_runner.go:130] ! I1014 15:22:34.688194       1 serving.go:386] Generated self-signed cert in-memory
	I1014 08:47:24.987962   15224 command_runner.go:130] ! W1014 15:22:36.199586       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I1014 08:47:24.988034   15224 command_runner.go:130] ! W1014 15:22:36.199661       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I1014 08:47:24.988034   15224 command_runner.go:130] ! W1014 15:22:36.199675       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	I1014 08:47:24.988034   15224 command_runner.go:130] ! W1014 15:22:36.199681       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1014 08:47:24.988120   15224 command_runner.go:130] ! I1014 15:22:36.288536       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I1014 08:47:24.988120   15224 command_runner.go:130] ! I1014 15:22:36.288649       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 08:47:24.988120   15224 command_runner.go:130] ! I1014 15:22:36.292628       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1014 08:47:24.988120   15224 command_runner.go:130] ! I1014 15:22:36.292942       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1014 08:47:24.988218   15224 command_runner.go:130] ! I1014 15:22:36.293038       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1014 08:47:24.988218   15224 command_runner.go:130] ! I1014 15:22:36.293102       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1014 08:47:24.988218   15224 command_runner.go:130] ! W1014 15:22:36.298034       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I1014 08:47:24.988344   15224 command_runner.go:130] ! E1014 15:22:36.298090       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:24.988526   15224 command_runner.go:130] ! W1014 15:22:36.298377       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I1014 08:47:24.988594   15224 command_runner.go:130] ! E1014 15:22:36.298420       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:24.988645   15224 command_runner.go:130] ! W1014 15:22:36.298587       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I1014 08:47:24.988645   15224 command_runner.go:130] ! E1014 15:22:36.298642       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:24.988738   15224 command_runner.go:130] ! W1014 15:22:36.298730       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I1014 08:47:24.988771   15224 command_runner.go:130] ! E1014 15:22:36.298855       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:24.988771   15224 command_runner.go:130] ! W1014 15:22:36.299272       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I1014 08:47:24.988771   15224 command_runner.go:130] ! E1014 15:22:36.299314       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:24.988771   15224 command_runner.go:130] ! W1014 15:22:36.299416       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I1014 08:47:24.988771   15224 command_runner.go:130] ! E1014 15:22:36.299618       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:24.988771   15224 command_runner.go:130] ! W1014 15:22:36.299693       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I1014 08:47:24.988771   15224 command_runner.go:130] ! E1014 15:22:36.299710       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:24.988771   15224 command_runner.go:130] ! W1014 15:22:36.299857       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I1014 08:47:24.988771   15224 command_runner.go:130] ! E1014 15:22:36.299920       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:24.988771   15224 command_runner.go:130] ! W1014 15:22:36.302822       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1014 08:47:24.988771   15224 command_runner.go:130] ! E1014 15:22:36.303096       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1014 08:47:24.988771   15224 command_runner.go:130] ! W1014 15:22:36.303242       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I1014 08:47:24.988771   15224 command_runner.go:130] ! E1014 15:22:36.303288       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:24.988771   15224 command_runner.go:130] ! W1014 15:22:36.303391       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I1014 08:47:24.988771   15224 command_runner.go:130] ! E1014 15:22:36.303426       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:24.989315   15224 command_runner.go:130] ! W1014 15:22:36.303605       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I1014 08:47:24.989368   15224 command_runner.go:130] ! E1014 15:22:36.303643       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:24.989423   15224 command_runner.go:130] ! W1014 15:22:36.303739       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I1014 08:47:24.989446   15224 command_runner.go:130] ! E1014 15:22:36.303771       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:24.989446   15224 command_runner.go:130] ! W1014 15:22:36.303825       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I1014 08:47:24.989446   15224 command_runner.go:130] ! E1014 15:22:36.303860       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:24.989446   15224 command_runner.go:130] ! W1014 15:22:36.304041       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I1014 08:47:24.989446   15224 command_runner.go:130] ! E1014 15:22:36.304079       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:24.989446   15224 command_runner.go:130] ! W1014 15:22:37.145637       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I1014 08:47:24.989446   15224 command_runner.go:130] ! E1014 15:22:37.146051       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:24.989446   15224 command_runner.go:130] ! W1014 15:22:37.146415       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I1014 08:47:24.989446   15224 command_runner.go:130] ! E1014 15:22:37.146705       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:24.989446   15224 command_runner.go:130] ! W1014 15:22:37.189116       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I1014 08:47:24.989446   15224 command_runner.go:130] ! E1014 15:22:37.189252       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:24.991215   15224 command_runner.go:130] ! W1014 15:22:37.205810       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I1014 08:47:24.991215   15224 command_runner.go:130] ! E1014 15:22:37.206152       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:24.991215   15224 command_runner.go:130] ! W1014 15:22:37.269786       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I1014 08:47:24.991215   15224 command_runner.go:130] ! E1014 15:22:37.269856       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:24.991215   15224 command_runner.go:130] ! W1014 15:22:37.270258       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I1014 08:47:24.991215   15224 command_runner.go:130] ! E1014 15:22:37.270286       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:24.991215   15224 command_runner.go:130] ! W1014 15:22:37.290857       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I1014 08:47:24.991215   15224 command_runner.go:130] ! E1014 15:22:37.290997       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:24.991215   15224 command_runner.go:130] ! W1014 15:22:37.304519       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1014 08:47:24.991215   15224 command_runner.go:130] ! E1014 15:22:37.305020       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1014 08:47:24.991215   15224 command_runner.go:130] ! W1014 15:22:37.328746       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I1014 08:47:24.991215   15224 command_runner.go:130] ! E1014 15:22:37.329049       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:24.991774   15224 command_runner.go:130] ! W1014 15:22:37.338059       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I1014 08:47:24.991859   15224 command_runner.go:130] ! E1014 15:22:37.338269       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:24.991859   15224 command_runner.go:130] ! W1014 15:22:37.394728       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I1014 08:47:24.991859   15224 command_runner.go:130] ! E1014 15:22:37.394803       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:24.991949   15224 command_runner.go:130] ! W1014 15:22:37.445455       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I1014 08:47:24.991949   15224 command_runner.go:130] ! E1014 15:22:37.445612       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:24.992036   15224 command_runner.go:130] ! W1014 15:22:37.490285       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I1014 08:47:24.992036   15224 command_runner.go:130] ! E1014 15:22:37.490817       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:24.992102   15224 command_runner.go:130] ! W1014 15:22:37.537304       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I1014 08:47:24.992129   15224 command_runner.go:130] ! E1014 15:22:37.537396       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:24.992162   15224 command_runner.go:130] ! W1014 15:22:37.713713       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I1014 08:47:24.992162   15224 command_runner.go:130] ! E1014 15:22:37.713777       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:24.992162   15224 command_runner.go:130] ! I1014 15:22:39.593596       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1014 08:47:24.992162   15224 command_runner.go:130] ! I1014 15:43:46.388691       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I1014 08:47:24.992162   15224 command_runner.go:130] ! I1014 15:43:46.388783       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1014 08:47:24.992162   15224 command_runner.go:130] ! I1014 15:43:46.389141       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1014 08:47:24.992162   15224 command_runner.go:130] ! E1014 15:43:46.389549       1 run.go:72] "command failed" err="finished without leader elect"
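	(The requestheader_controller warning in the scheduler log above prints its own suggested remedy. A minimal sketch of that command, assuming the warning persists past startup rather than being a transient race: the rolebinding name below is a hypothetical placeholder, and since the scheduler authenticates as the user "system:kube-scheduler" per the forbidden errors above, a --user binding is the closer analogue to the log's YOUR_NS:YOUR_SA serviceaccount placeholder:

	    kubectl create rolebinding scheduler-authentication-reader \
	      -n kube-system \
	      --role=extension-apiserver-authentication-reader \
	      --user=system:kube-scheduler
	)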
	I1014 08:47:25.003406   15224 logs.go:123] Gathering logs for kube-apiserver [a834664fc8b8] ...
	I1014 08:47:25.003406   15224 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a834664fc8b8"
	I1014 08:47:25.034566   15224 command_runner.go:130] ! I1014 15:46:12.133612       1 options.go:228] external host was not specified, using 172.20.106.123
	I1014 08:47:25.034653   15224 command_runner.go:130] ! I1014 15:46:12.139596       1 server.go:142] Version: v1.31.1
	I1014 08:47:25.034653   15224 command_runner.go:130] ! I1014 15:46:12.140322       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 08:47:25.034653   15224 command_runner.go:130] ! I1014 15:46:13.070213       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I1014 08:47:25.034734   15224 command_runner.go:130] ! I1014 15:46:13.112422       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1014 08:47:25.034734   15224 command_runner.go:130] ! I1014 15:46:13.116622       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I1014 08:47:25.034734   15224 command_runner.go:130] ! I1014 15:46:13.116890       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1014 08:47:25.034734   15224 command_runner.go:130] ! I1014 15:46:13.117611       1 instance.go:232] Using reconciler: lease
	I1014 08:47:25.034734   15224 command_runner.go:130] ! I1014 15:46:13.606403       1 handler.go:286] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I1014 08:47:25.034734   15224 command_runner.go:130] ! W1014 15:46:13.606961       1 genericapiserver.go:765] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:25.034877   15224 command_runner.go:130] ! I1014 15:46:13.910757       1 handler.go:286] Adding GroupVersion  v1 to ResourceManager
	I1014 08:47:25.034877   15224 command_runner.go:130] ! I1014 15:46:13.911096       1 apis.go:105] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I1014 08:47:25.034877   15224 command_runner.go:130] ! I1014 15:46:14.140196       1 apis.go:105] API group "storagemigration.k8s.io" is not enabled, skipping.
	I1014 08:47:25.034877   15224 command_runner.go:130] ! I1014 15:46:14.332586       1 apis.go:105] API group "resource.k8s.io" is not enabled, skipping.
	I1014 08:47:25.034982   15224 command_runner.go:130] ! I1014 15:46:14.344695       1 handler.go:286] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I1014 08:47:25.034982   15224 command_runner.go:130] ! W1014 15:46:14.344792       1 genericapiserver.go:765] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:25.034982   15224 command_runner.go:130] ! W1014 15:46:14.344802       1 genericapiserver.go:765] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I1014 08:47:25.034982   15224 command_runner.go:130] ! I1014 15:46:14.345547       1 handler.go:286] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I1014 08:47:25.034982   15224 command_runner.go:130] ! W1014 15:46:14.345645       1 genericapiserver.go:765] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:25.034982   15224 command_runner.go:130] ! I1014 15:46:14.346729       1 handler.go:286] Adding GroupVersion autoscaling v2 to ResourceManager
	I1014 08:47:25.035096   15224 command_runner.go:130] ! I1014 15:46:14.348142       1 handler.go:286] Adding GroupVersion autoscaling v1 to ResourceManager
	I1014 08:47:25.035096   15224 command_runner.go:130] ! W1014 15:46:14.348261       1 genericapiserver.go:765] Skipping API autoscaling/v2beta1 because it has no resources.
	I1014 08:47:25.035159   15224 command_runner.go:130] ! W1014 15:46:14.348272       1 genericapiserver.go:765] Skipping API autoscaling/v2beta2 because it has no resources.
	I1014 08:47:25.035186   15224 command_runner.go:130] ! I1014 15:46:14.350632       1 handler.go:286] Adding GroupVersion batch v1 to ResourceManager
	I1014 08:47:25.035186   15224 command_runner.go:130] ! W1014 15:46:14.350741       1 genericapiserver.go:765] Skipping API batch/v1beta1 because it has no resources.
	I1014 08:47:25.035236   15224 command_runner.go:130] ! I1014 15:46:14.352378       1 handler.go:286] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I1014 08:47:25.035236   15224 command_runner.go:130] ! W1014 15:46:14.352489       1 genericapiserver.go:765] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:25.035236   15224 command_runner.go:130] ! W1014 15:46:14.352501       1 genericapiserver.go:765] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I1014 08:47:25.035236   15224 command_runner.go:130] ! I1014 15:46:14.353674       1 handler.go:286] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I1014 08:47:25.035236   15224 command_runner.go:130] ! W1014 15:46:14.353813       1 genericapiserver.go:765] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:25.035236   15224 command_runner.go:130] ! W1014 15:46:14.353843       1 genericapiserver.go:765] Skipping API coordination.k8s.io/v1alpha1 because it has no resources.
	I1014 08:47:25.035342   15224 command_runner.go:130] ! I1014 15:46:14.355117       1 handler.go:286] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I1014 08:47:25.035372   15224 command_runner.go:130] ! W1014 15:46:14.355256       1 genericapiserver.go:765] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:25.035372   15224 command_runner.go:130] ! I1014 15:46:14.358401       1 handler.go:286] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I1014 08:47:25.035372   15224 command_runner.go:130] ! W1014 15:46:14.358517       1 genericapiserver.go:765] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:25.035372   15224 command_runner.go:130] ! W1014 15:46:14.358528       1 genericapiserver.go:765] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I1014 08:47:25.035372   15224 command_runner.go:130] ! I1014 15:46:14.359534       1 handler.go:286] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I1014 08:47:25.035372   15224 command_runner.go:130] ! W1014 15:46:14.359632       1 genericapiserver.go:765] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:25.035447   15224 command_runner.go:130] ! W1014 15:46:14.359643       1 genericapiserver.go:765] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I1014 08:47:25.035447   15224 command_runner.go:130] ! I1014 15:46:14.360836       1 handler.go:286] Adding GroupVersion policy v1 to ResourceManager
	I1014 08:47:25.035474   15224 command_runner.go:130] ! W1014 15:46:14.360942       1 genericapiserver.go:765] Skipping API policy/v1beta1 because it has no resources.
	I1014 08:47:25.035474   15224 command_runner.go:130] ! I1014 15:46:14.363702       1 handler.go:286] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I1014 08:47:25.035474   15224 command_runner.go:130] ! W1014 15:46:14.363848       1 genericapiserver.go:765] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:25.035552   15224 command_runner.go:130] ! W1014 15:46:14.363860       1 genericapiserver.go:765] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I1014 08:47:25.035552   15224 command_runner.go:130] ! I1014 15:46:14.364685       1 handler.go:286] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I1014 08:47:25.035580   15224 command_runner.go:130] ! W1014 15:46:14.364801       1 genericapiserver.go:765] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:25.035611   15224 command_runner.go:130] ! W1014 15:46:14.364812       1 genericapiserver.go:765] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I1014 08:47:25.035611   15224 command_runner.go:130] ! I1014 15:46:14.368101       1 handler.go:286] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I1014 08:47:25.035651   15224 command_runner.go:130] ! W1014 15:46:14.368216       1 genericapiserver.go:765] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:25.035651   15224 command_runner.go:130] ! W1014 15:46:14.368228       1 genericapiserver.go:765] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I1014 08:47:25.035693   15224 command_runner.go:130] ! I1014 15:46:14.370008       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1 to ResourceManager
	I1014 08:47:25.035693   15224 command_runner.go:130] ! I1014 15:46:14.371702       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
	I1014 08:47:25.035733   15224 command_runner.go:130] ! W1014 15:46:14.371808       1 genericapiserver.go:765] Skipping API flowcontrol.apiserver.k8s.io/v1beta2 because it has no resources.
	I1014 08:47:25.035733   15224 command_runner.go:130] ! W1014 15:46:14.371818       1 genericapiserver.go:765] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:25.035733   15224 command_runner.go:130] ! I1014 15:46:14.376771       1 handler.go:286] Adding GroupVersion apps v1 to ResourceManager
	I1014 08:47:25.035733   15224 command_runner.go:130] ! W1014 15:46:14.376868       1 genericapiserver.go:765] Skipping API apps/v1beta2 because it has no resources.
	I1014 08:47:25.035818   15224 command_runner.go:130] ! W1014 15:46:14.376877       1 genericapiserver.go:765] Skipping API apps/v1beta1 because it has no resources.
	I1014 08:47:25.035883   15224 command_runner.go:130] ! I1014 15:46:14.379998       1 handler.go:286] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I1014 08:47:25.035883   15224 command_runner.go:130] ! W1014 15:46:14.380101       1 genericapiserver.go:765] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:25.035915   15224 command_runner.go:130] ! W1014 15:46:14.380112       1 genericapiserver.go:765] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I1014 08:47:25.035915   15224 command_runner.go:130] ! I1014 15:46:14.380956       1 handler.go:286] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I1014 08:47:25.035915   15224 command_runner.go:130] ! W1014 15:46:14.381059       1 genericapiserver.go:765] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:25.035915   15224 command_runner.go:130] ! I1014 15:46:14.395072       1 handler.go:286] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I1014 08:47:25.035915   15224 command_runner.go:130] ! W1014 15:46:14.395116       1 genericapiserver.go:765] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:25.035915   15224 command_runner.go:130] ! I1014 15:46:15.014537       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1014 08:47:25.035915   15224 command_runner.go:130] ! I1014 15:46:15.014702       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1014 08:47:25.035915   15224 command_runner.go:130] ! I1014 15:46:15.016123       1 dynamic_serving_content.go:135] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I1014 08:47:25.035915   15224 command_runner.go:130] ! I1014 15:46:15.016823       1 secure_serving.go:213] Serving securely on [::]:8443
	I1014 08:47:25.035915   15224 command_runner.go:130] ! I1014 15:46:15.017426       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1014 08:47:25.035915   15224 command_runner.go:130] ! I1014 15:46:15.018450       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I1014 08:47:25.035915   15224 command_runner.go:130] ! I1014 15:46:15.018766       1 remote_available_controller.go:411] Starting RemoteAvailability controller
	I1014 08:47:25.035915   15224 command_runner.go:130] ! I1014 15:46:15.018850       1 cache.go:32] Waiting for caches to sync for RemoteAvailability controller
	I1014 08:47:25.035915   15224 command_runner.go:130] ! I1014 15:46:15.018985       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I1014 08:47:25.035915   15224 command_runner.go:130] ! I1014 15:46:15.021391       1 controller.go:119] Starting legacy_token_tracking_controller
	I1014 08:47:25.035915   15224 command_runner.go:130] ! I1014 15:46:15.021471       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I1014 08:47:25.035915   15224 command_runner.go:130] ! I1014 15:46:15.021517       1 aggregator.go:169] waiting for initial CRD sync...
	I1014 08:47:25.035915   15224 command_runner.go:130] ! I1014 15:46:15.022050       1 dynamic_serving_content.go:135] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I1014 08:47:25.035915   15224 command_runner.go:130] ! I1014 15:46:15.022573       1 cluster_authentication_trust_controller.go:443] Starting cluster_authentication_trust_controller controller
	I1014 08:47:25.035915   15224 command_runner.go:130] ! I1014 15:46:15.022688       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
	I1014 08:47:25.035915   15224 command_runner.go:130] ! I1014 15:46:15.022775       1 system_namespaces_controller.go:66] Starting system namespaces controller
	I1014 08:47:25.035915   15224 command_runner.go:130] ! I1014 15:46:15.026778       1 apf_controller.go:377] Starting API Priority and Fairness config controller
	I1014 08:47:25.035915   15224 command_runner.go:130] ! I1014 15:46:15.027043       1 controller.go:78] Starting OpenAPI AggregationController
	I1014 08:47:25.035915   15224 command_runner.go:130] ! I1014 15:46:15.027942       1 customresource_discovery_controller.go:292] Starting DiscoveryController
	I1014 08:47:25.035915   15224 command_runner.go:130] ! I1014 15:46:15.029402       1 local_available_controller.go:156] Starting LocalAvailability controller
	I1014 08:47:25.035915   15224 command_runner.go:130] ! I1014 15:46:15.029447       1 cache.go:32] Waiting for caches to sync for LocalAvailability controller
	I1014 08:47:25.035915   15224 command_runner.go:130] ! I1014 15:46:15.029815       1 apiservice_controller.go:100] Starting APIServiceRegistrationController
	I1014 08:47:25.035915   15224 command_runner.go:130] ! I1014 15:46:15.029850       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I1014 08:47:25.035915   15224 command_runner.go:130] ! I1014 15:46:15.034040       1 crdregistration_controller.go:114] Starting crd-autoregister controller
	I1014 08:47:25.035915   15224 command_runner.go:130] ! I1014 15:46:15.034136       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I1014 08:47:25.035915   15224 command_runner.go:130] ! I1014 15:46:15.034690       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1014 08:47:25.036468   15224 command_runner.go:130] ! I1014 15:46:15.034946       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1014 08:47:25.036468   15224 command_runner.go:130] ! I1014 15:46:15.082229       1 controller.go:142] Starting OpenAPI controller
	I1014 08:47:25.036468   15224 command_runner.go:130] ! I1014 15:46:15.083838       1 controller.go:90] Starting OpenAPI V3 controller
	I1014 08:47:25.036550   15224 command_runner.go:130] ! I1014 15:46:15.083894       1 naming_controller.go:294] Starting NamingConditionController
	I1014 08:47:25.036550   15224 command_runner.go:130] ! I1014 15:46:15.086443       1 establishing_controller.go:81] Starting EstablishingController
	I1014 08:47:25.036550   15224 command_runner.go:130] ! I1014 15:46:15.087455       1 nonstructuralschema_controller.go:195] Starting NonStructuralSchemaConditionController
	I1014 08:47:25.036550   15224 command_runner.go:130] ! I1014 15:46:15.088333       1 apiapproval_controller.go:189] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I1014 08:47:25.036647   15224 command_runner.go:130] ! I1014 15:46:15.092677       1 crd_finalizer.go:269] Starting CRDFinalizer
	I1014 08:47:25.036647   15224 command_runner.go:130] ! I1014 15:46:15.212597       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1014 08:47:25.036647   15224 command_runner.go:130] ! I1014 15:46:15.212691       1 policy_source.go:224] refreshing policies
	I1014 08:47:25.036718   15224 command_runner.go:130] ! I1014 15:46:15.221529       1 shared_informer.go:320] Caches are synced for configmaps
	I1014 08:47:25.036745   15224 command_runner.go:130] ! I1014 15:46:15.226910       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1014 08:47:25.036745   15224 command_runner.go:130] ! I1014 15:46:15.227013       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1014 08:47:25.036778   15224 command_runner.go:130] ! I1014 15:46:15.229937       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1014 08:47:25.036778   15224 command_runner.go:130] ! I1014 15:46:15.231898       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1014 08:47:25.036778   15224 command_runner.go:130] ! I1014 15:46:15.233234       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1014 08:47:25.036778   15224 command_runner.go:130] ! I1014 15:46:15.234375       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1014 08:47:25.036778   15224 command_runner.go:130] ! I1014 15:46:15.235151       1 aggregator.go:171] initial CRD sync complete...
	I1014 08:47:25.036778   15224 command_runner.go:130] ! I1014 15:46:15.235400       1 autoregister_controller.go:144] Starting autoregister controller
	I1014 08:47:25.036778   15224 command_runner.go:130] ! I1014 15:46:15.235712       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1014 08:47:25.036778   15224 command_runner.go:130] ! I1014 15:46:15.235936       1 cache.go:39] Caches are synced for autoregister controller
	I1014 08:47:25.036778   15224 command_runner.go:130] ! I1014 15:46:15.255261       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1014 08:47:25.036778   15224 command_runner.go:130] ! I1014 15:46:15.256039       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I1014 08:47:25.036778   15224 command_runner.go:130] ! I1014 15:46:15.271561       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1014 08:47:25.036778   15224 command_runner.go:130] ! I1014 15:46:15.319091       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1014 08:47:25.036778   15224 command_runner.go:130] ! I1014 15:46:16.036564       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1014 08:47:25.036778   15224 command_runner.go:130] ! W1014 15:46:16.558489       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.20.100.167 172.20.106.123]
	I1014 08:47:25.036778   15224 command_runner.go:130] ! I1014 15:46:16.560272       1 controller.go:615] quota admission added evaluator for: endpoints
	I1014 08:47:25.036778   15224 command_runner.go:130] ! I1014 15:46:16.573015       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1014 08:47:25.036778   15224 command_runner.go:130] ! I1014 15:46:18.229365       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1014 08:47:25.036778   15224 command_runner.go:130] ! I1014 15:46:18.748102       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1014 08:47:25.036778   15224 command_runner.go:130] ! I1014 15:46:18.793266       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1014 08:47:25.036778   15224 command_runner.go:130] ! I1014 15:46:18.985788       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1014 08:47:25.036778   15224 command_runner.go:130] ! I1014 15:46:19.024530       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1014 08:47:25.036778   15224 command_runner.go:130] ! W1014 15:46:36.563040       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.20.106.123]
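	(The two lease.go warnings above, "Resetting endpoints for master service", show the default/kubernetes Service endpoints being rewritten as control-plane addresses come and go: 172.20.100.167 drops out once only 172.20.106.123 remains. A hedged way to inspect the same object directly:

	    kubectl -n default get endpoints kubernetes -o yaml
	)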
	I1014 08:47:25.044949   15224 logs.go:123] Gathering logs for kube-scheduler [d428685276e1] ...
	I1014 08:47:25.044949   15224 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d428685276e1"
	I1014 08:47:25.077179   15224 command_runner.go:130] ! I1014 15:46:12.515594       1 serving.go:386] Generated self-signed cert in-memory
	I1014 08:47:25.077179   15224 command_runner.go:130] ! W1014 15:46:15.152686       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I1014 08:47:25.077179   15224 command_runner.go:130] ! W1014 15:46:15.152818       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I1014 08:47:25.077398   15224 command_runner.go:130] ! W1014 15:46:15.152851       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	I1014 08:47:25.077398   15224 command_runner.go:130] ! W1014 15:46:15.153007       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1014 08:47:25.077398   15224 command_runner.go:130] ! I1014 15:46:15.250163       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I1014 08:47:25.077398   15224 command_runner.go:130] ! I1014 15:46:15.250420       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 08:47:25.077480   15224 command_runner.go:130] ! I1014 15:46:15.258344       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1014 08:47:25.077480   15224 command_runner.go:130] ! I1014 15:46:15.258735       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1014 08:47:25.077480   15224 command_runner.go:130] ! I1014 15:46:15.263966       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1014 08:47:25.077480   15224 command_runner.go:130] ! I1014 15:46:15.258753       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1014 08:47:25.077480   15224 command_runner.go:130] ! I1014 15:46:15.365145       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1014 08:47:27.597723   15224 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 08:47:27.628702   15224 command_runner.go:130] > 1906
	I1014 08:47:27.628702   15224 api_server.go:72] duration metric: took 1m6.952574s to wait for apiserver process to appear ...
	I1014 08:47:27.628927   15224 api_server.go:88] waiting for apiserver healthz status ...
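	(At this point minikube polls the apiserver's healthz endpoint rather than just the process. As a sketch, the same probe can be issued by hand against the address and secure port from the apiserver log above; /healthz is normally readable anonymously via the system:public-info-viewer cluster role:

	    curl -k https://172.20.106.123:8443/healthz
	)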
	I1014 08:47:27.641529   15224 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 08:47:27.666077   15224 command_runner.go:130] > a834664fc8b8
	I1014 08:47:27.666944   15224 logs.go:282] 1 containers: [a834664fc8b8]
	I1014 08:47:27.676549   15224 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 08:47:27.700651   15224 command_runner.go:130] > 48c8492e231e
	I1014 08:47:27.700744   15224 logs.go:282] 1 containers: [48c8492e231e]
	I1014 08:47:27.711500   15224 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 08:47:27.735082   15224 command_runner.go:130] > 5d223e2e64fc
	I1014 08:47:27.735497   15224 command_runner.go:130] > d9831e9f8ce8
	I1014 08:47:27.735578   15224 logs.go:282] 2 containers: [5d223e2e64fc d9831e9f8ce8]
	I1014 08:47:27.745090   15224 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 08:47:27.771662   15224 command_runner.go:130] > d428685276e1
	I1014 08:47:27.771662   15224 command_runner.go:130] > 661e75bbf6b4
	I1014 08:47:27.774581   15224 logs.go:282] 2 containers: [d428685276e1 661e75bbf6b4]
	I1014 08:47:27.783397   15224 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 08:47:27.807518   15224 command_runner.go:130] > e83db276dec3
	I1014 08:47:27.807518   15224 command_runner.go:130] > ea19428d7036
	I1014 08:47:27.807518   15224 logs.go:282] 2 containers: [e83db276dec3 ea19428d7036]
	I1014 08:47:27.815865   15224 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 08:47:27.841755   15224 command_runner.go:130] > 8af48c446f7e
	I1014 08:47:27.841755   15224 command_runner.go:130] > 712aad669c9f
	I1014 08:47:27.841864   15224 logs.go:282] 2 containers: [8af48c446f7e 712aad669c9f]
	I1014 08:47:27.851510   15224 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 08:47:27.877955   15224 command_runner.go:130] > bba035362eb9
	I1014 08:47:27.877955   15224 command_runner.go:130] > fcdf89a3ac8c
	I1014 08:47:27.878041   15224 logs.go:282] 2 containers: [bba035362eb9 fcdf89a3ac8c]
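	(Each logs.go:282 line above resolves a control-plane component to its Docker container ID via a name filter, then the 400-line tail is fetched. The same discovery can be reproduced by hand inside the node; `minikube ssh -p multinode-671000` is one way in, and the container ID below is the apiserver ID found above:

	    docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	    docker logs --tail 400 a834664fc8b8
	)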
	I1014 08:47:27.878107   15224 logs.go:123] Gathering logs for etcd [48c8492e231e] ...
	I1014 08:47:27.878107   15224 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48c8492e231e"
	I1014 08:47:27.908030   15224 command_runner.go:130] ! {"level":"warn","ts":"2024-10-14T15:46:11.845953Z","caller":"embed/config.go:687","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I1014 08:47:27.908369   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:11.848739Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.20.106.123:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.20.106.123:2380","--initial-cluster=multinode-671000=https://172.20.106.123:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.20.106.123:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.20.106.123:2380","--name=multinode-671000","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I1014 08:47:27.908369   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:11.848857Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I1014 08:47:27.908556   15224 command_runner.go:130] ! {"level":"warn","ts":"2024-10-14T15:46:11.848886Z","caller":"embed/config.go:687","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I1014 08:47:27.908556   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:11.848900Z","caller":"embed/etcd.go:128","msg":"configuring peer listeners","listen-peer-urls":["https://172.20.106.123:2380"]}
	I1014 08:47:27.908556   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:11.848962Z","caller":"embed/etcd.go:496","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I1014 08:47:27.908644   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:11.854418Z","caller":"embed/etcd.go:136","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.20.106.123:2379"]}
	I1014 08:47:27.908698   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:11.857036Z","caller":"embed/etcd.go:310","msg":"starting an etcd server","etcd-version":"3.5.15","git-sha":"9a5533382","go-version":"go1.21.12","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-671000","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.20.106.123:2380"],"listen-peer-urls":["https://172.20.106.123:2380"],"advertise-client-urls":["https://172.20.106.123:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.20.106.123:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I1014 08:47:27.908799   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:11.899392Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"40.66952ms"}
	I1014 08:47:27.908799   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:11.949173Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	I1014 08:47:27.908895   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:11.984197Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"2dcbff584edb18cc","local-member-id":"782c48cbdf98397b","commit-index":2088}
	I1014 08:47:27.908895   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:11.985089Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"782c48cbdf98397b switched to configuration voters=()"}
	I1014 08:47:27.908895   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:11.987739Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"782c48cbdf98397b became follower at term 2"}
	I1014 08:47:27.908989   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:11.987772Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 782c48cbdf98397b [peers: [], term: 2, commit: 2088, applied: 0, lastindex: 2088, lastterm: 2]"}
	I1014 08:47:27.908989   15224 command_runner.go:130] ! {"level":"warn","ts":"2024-10-14T15:46:12.003567Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	I1014 08:47:27.909050   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.010981Z","caller":"mvcc/kvstore.go:341","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1396}
	I1014 08:47:27.909050   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.025362Z","caller":"mvcc/kvstore.go:418","msg":"kvstore restored","current-rev":1813}
	I1014 08:47:27.909116   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.035174Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I1014 08:47:27.909143   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.045608Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"782c48cbdf98397b","timeout":"7s"}
	I1014 08:47:27.909143   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.046705Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"782c48cbdf98397b"}
	I1014 08:47:27.909222   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.046807Z","caller":"etcdserver/server.go:867","msg":"starting etcd server","local-member-id":"782c48cbdf98397b","local-server-version":"3.5.15","cluster-version":"to_be_decided"}
	I1014 08:47:27.909222   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.047198Z","caller":"etcdserver/server.go:767","msg":"starting initial election tick advance","election-ticks":10}
	I1014 08:47:27.909322   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.047977Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I1014 08:47:27.909322   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.048044Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I1014 08:47:27.909322   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.048058Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I1014 08:47:27.909386   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.048736Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"782c48cbdf98397b switched to configuration voters=(8659376223993477499)"}
	I1014 08:47:27.909445   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.049262Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"2dcbff584edb18cc","local-member-id":"782c48cbdf98397b","added-peer-id":"782c48cbdf98397b","added-peer-peer-urls":["https://172.20.100.167:2380"]}
	I1014 08:47:27.909469   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.049694Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"2dcbff584edb18cc","local-member-id":"782c48cbdf98397b","cluster-version":"3.5"}
	I1014 08:47:27.909497   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.049815Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I1014 08:47:27.909497   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.056204Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	I1014 08:47:27.909497   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.062166Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I1014 08:47:27.909497   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.062574Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"782c48cbdf98397b","initial-advertise-peer-urls":["https://172.20.106.123:2380"],"listen-peer-urls":["https://172.20.106.123:2380"],"advertise-client-urls":["https://172.20.106.123:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.20.106.123:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I1014 08:47:27.909497   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.062654Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I1014 08:47:27.909497   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.062749Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"172.20.106.123:2380"}
	I1014 08:47:27.909497   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.062764Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"172.20.106.123:2380"}
	I1014 08:47:27.909497   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.489231Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"782c48cbdf98397b is starting a new election at term 2"}
	I1014 08:47:27.909497   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.489328Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"782c48cbdf98397b became pre-candidate at term 2"}
	I1014 08:47:27.909497   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.489358Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"782c48cbdf98397b received MsgPreVoteResp from 782c48cbdf98397b at term 2"}
	I1014 08:47:27.909497   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.489374Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"782c48cbdf98397b became candidate at term 3"}
	I1014 08:47:27.909497   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.489385Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"782c48cbdf98397b received MsgVoteResp from 782c48cbdf98397b at term 3"}
	I1014 08:47:27.909497   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.489395Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"782c48cbdf98397b became leader at term 3"}
	I1014 08:47:27.909497   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.489487Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 782c48cbdf98397b elected leader 782c48cbdf98397b at term 3"}
	I1014 08:47:27.909497   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.496949Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I1014 08:47:27.909497   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.496902Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"782c48cbdf98397b","local-member-attributes":"{Name:multinode-671000 ClientURLs:[https://172.20.106.123:2379]}","request-path":"/0/members/782c48cbdf98397b/attributes","cluster-id":"2dcbff584edb18cc","publish-timeout":"7s"}
	I1014 08:47:27.909497   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.497822Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I1014 08:47:27.909497   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.499586Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I1014 08:47:27.909497   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.499631Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I1014 08:47:27.909497   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.500815Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	I1014 08:47:27.909497   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.502392Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.20.106.123:2379"}
	I1014 08:47:27.909497   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.503879Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	I1014 08:47:27.909497   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.505686Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I1014 08:47:27.922164   15224 logs.go:123] Gathering logs for coredns [d9831e9f8ce8] ...
	I1014 08:47:27.922164   15224 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9831e9f8ce8"
	I1014 08:47:27.954323   15224 command_runner.go:130] > .:53
	I1014 08:47:27.954323   15224 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 6020d6b23d8ab86e45d1d2aab12a43bd19ffd1800c3e6fd0a66779be525e59cd5618fbe141b55ee3a43e920652bc72e7efafa696e55e63b2f614e5fc80dabe10
	I1014 08:47:27.954323   15224 command_runner.go:130] > CoreDNS-1.11.3
	I1014 08:47:27.954323   15224 command_runner.go:130] > linux/amd64, go1.21.11, a6338e9
	I1014 08:47:27.954323   15224 command_runner.go:130] > [INFO] 127.0.0.1:35483 - 39257 "HINFO IN 8382239991273371198.8905610076788717940. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.074337261s
	I1014 08:47:27.954323   15224 command_runner.go:130] > [INFO] 10.244.1.2:36950 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0003062s
	I1014 08:47:27.954323   15224 command_runner.go:130] > [INFO] 10.244.1.2:49277 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.118118924s
	I1014 08:47:27.954323   15224 command_runner.go:130] > [INFO] 10.244.1.2:33122 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.153089702s
	I1014 08:47:27.954323   15224 command_runner.go:130] > [INFO] 10.244.1.2:44549 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.188160849s
	I1014 08:47:27.954323   15224 command_runner.go:130] > [INFO] 10.244.0.3:43390 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000191s
	I1014 08:47:27.954323   15224 command_runner.go:130] > [INFO] 10.244.0.3:59817 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000279499s
	I1014 08:47:27.954323   15224 command_runner.go:130] > [INFO] 10.244.0.3:34294 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.0002004s
	I1014 08:47:27.954323   15224 command_runner.go:130] > [INFO] 10.244.0.3:56220 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.0002257s
	I1014 08:47:27.954323   15224 command_runner.go:130] > [INFO] 10.244.1.2:44291 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002098s
	I1014 08:47:27.954323   15224 command_runner.go:130] > [INFO] 10.244.1.2:42361 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.17965629s
	I1014 08:47:27.954323   15224 command_runner.go:130] > [INFO] 10.244.1.2:48756 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0002923s
	I1014 08:47:27.954323   15224 command_runner.go:130] > [INFO] 10.244.1.2:53437 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000274799s
	I1014 08:47:27.954323   15224 command_runner.go:130] > [INFO] 10.244.1.2:60026 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.013560692s
	I1014 08:47:27.954323   15224 command_runner.go:130] > [INFO] 10.244.1.2:39241 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001752s
	I1014 08:47:27.954323   15224 command_runner.go:130] > [INFO] 10.244.1.2:36696 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0003084s
	I1014 08:47:27.954323   15224 command_runner.go:130] > [INFO] 10.244.1.2:51603 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001109s
	I1014 08:47:27.954323   15224 command_runner.go:130] > [INFO] 10.244.0.3:37516 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002057s
	I1014 08:47:27.954323   15224 command_runner.go:130] > [INFO] 10.244.0.3:41090 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0001329s
	I1014 08:47:27.954323   15224 command_runner.go:130] > [INFO] 10.244.0.3:45330 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001397s
	I1014 08:47:27.954323   15224 command_runner.go:130] > [INFO] 10.244.0.3:46520 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000154699s
	I1014 08:47:27.954323   15224 command_runner.go:130] > [INFO] 10.244.0.3:40342 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0001347s
	I1014 08:47:27.954323   15224 command_runner.go:130] > [INFO] 10.244.0.3:60385 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001518s
	I1014 08:47:27.954323   15224 command_runner.go:130] > [INFO] 10.244.0.3:56338 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001527s
	I1014 08:47:27.954323   15224 command_runner.go:130] > [INFO] 10.244.0.3:50480 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0002505s
	I1014 08:47:27.954323   15224 command_runner.go:130] > [INFO] 10.244.1.2:60726 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002297s
	I1014 08:47:27.954323   15224 command_runner.go:130] > [INFO] 10.244.1.2:44347 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0002249s
	I1014 08:47:27.954900   15224 command_runner.go:130] > [INFO] 10.244.1.2:49092 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001075s
	I1014 08:47:27.954900   15224 command_runner.go:130] > [INFO] 10.244.1.2:54937 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000068s
	I1014 08:47:27.954972   15224 command_runner.go:130] > [INFO] 10.244.0.3:59749 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002552s
	I1014 08:47:27.954972   15224 command_runner.go:130] > [INFO] 10.244.0.3:56008 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0002499s
	I1014 08:47:27.954972   15224 command_runner.go:130] > [INFO] 10.244.0.3:60338 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001065s
	I1014 08:47:27.954972   15224 command_runner.go:130] > [INFO] 10.244.0.3:49712 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001634s
	I1014 08:47:27.954972   15224 command_runner.go:130] > [INFO] 10.244.1.2:56508 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001577s
	I1014 08:47:27.955083   15224 command_runner.go:130] > [INFO] 10.244.1.2:45387 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001545s
	I1014 08:47:27.955083   15224 command_runner.go:130] > [INFO] 10.244.1.2:39608 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0001419s
	I1014 08:47:27.955083   15224 command_runner.go:130] > [INFO] 10.244.1.2:53878 - 5 "PTR IN 1.96.20.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.0001923s
	I1014 08:47:27.955083   15224 command_runner.go:130] > [INFO] 10.244.0.3:58589 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002828s
	I1014 08:47:27.955166   15224 command_runner.go:130] > [INFO] 10.244.0.3:58608 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001609s
	I1014 08:47:27.955166   15224 command_runner.go:130] > [INFO] 10.244.0.3:52599 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000633s
	I1014 08:47:27.955166   15224 command_runner.go:130] > [INFO] 10.244.0.3:58233 - 5 "PTR IN 1.96.20.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.0001058s
	I1014 08:47:27.955166   15224 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I1014 08:47:27.955229   15224 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
	I1014 08:47:27.957882   15224 logs.go:123] Gathering logs for kube-scheduler [d428685276e1] ...
	I1014 08:47:27.957882   15224 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d428685276e1"
	I1014 08:47:27.985124   15224 command_runner.go:130] ! I1014 15:46:12.515594       1 serving.go:386] Generated self-signed cert in-memory
	I1014 08:47:27.985617   15224 command_runner.go:130] ! W1014 15:46:15.152686       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I1014 08:47:27.985715   15224 command_runner.go:130] ! W1014 15:46:15.152818       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I1014 08:47:27.985892   15224 command_runner.go:130] ! W1014 15:46:15.152851       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	I1014 08:47:27.986047   15224 command_runner.go:130] ! W1014 15:46:15.153007       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1014 08:47:27.986047   15224 command_runner.go:130] ! I1014 15:46:15.250163       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I1014 08:47:27.986047   15224 command_runner.go:130] ! I1014 15:46:15.250420       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 08:47:27.986144   15224 command_runner.go:130] ! I1014 15:46:15.258344       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1014 08:47:27.986190   15224 command_runner.go:130] ! I1014 15:46:15.258735       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1014 08:47:27.986190   15224 command_runner.go:130] ! I1014 15:46:15.263966       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1014 08:47:27.986190   15224 command_runner.go:130] ! I1014 15:46:15.258753       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1014 08:47:27.986250   15224 command_runner.go:130] ! I1014 15:46:15.365145       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1014 08:47:27.989677   15224 logs.go:123] Gathering logs for kube-proxy [ea19428d7036] ...
	I1014 08:47:27.989741   15224 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea19428d7036"
	I1014 08:47:28.014957   15224 command_runner.go:130] ! I1014 15:22:47.466748       1 server_linux.go:66] "Using iptables proxy"
	I1014 08:47:28.015020   15224 command_runner.go:130] ! E1014 15:22:47.511018       1 proxier.go:734] "Error cleaning up nftables rules" err=<
	I1014 08:47:28.015020   15224 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-24: Error: Could not process rule: Operation not supported
	I1014 08:47:28.015020   15224 command_runner.go:130] ! 	add table ip kube-proxy
	I1014 08:47:28.015097   15224 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^
	I1014 08:47:28.015097   15224 command_runner.go:130] !  >
	I1014 08:47:28.015097   15224 command_runner.go:130] ! E1014 15:22:47.546236       1 proxier.go:734] "Error cleaning up nftables rules" err=<
	I1014 08:47:28.015097   15224 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
	I1014 08:47:28.015097   15224 command_runner.go:130] ! 	add table ip6 kube-proxy
	I1014 08:47:28.015097   15224 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^^
	I1014 08:47:28.015172   15224 command_runner.go:130] !  >
	I1014 08:47:28.015172   15224 command_runner.go:130] ! I1014 15:22:47.606437       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["172.20.100.167"]
	I1014 08:47:28.015201   15224 command_runner.go:130] ! E1014 15:22:47.606505       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1014 08:47:28.015233   15224 command_runner.go:130] ! I1014 15:22:47.788755       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1014 08:47:28.015233   15224 command_runner.go:130] ! I1014 15:22:47.788820       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1014 08:47:28.015233   15224 command_runner.go:130] ! I1014 15:22:47.788852       1 server_linux.go:169] "Using iptables Proxier"
	I1014 08:47:28.015233   15224 command_runner.go:130] ! I1014 15:22:47.807050       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1014 08:47:28.015233   15224 command_runner.go:130] ! I1014 15:22:47.807424       1 server.go:483] "Version info" version="v1.31.1"
	I1014 08:47:28.015233   15224 command_runner.go:130] ! I1014 15:22:47.807440       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 08:47:28.015233   15224 command_runner.go:130] ! I1014 15:22:47.810361       1 config.go:199] "Starting service config controller"
	I1014 08:47:28.015233   15224 command_runner.go:130] ! I1014 15:22:47.810416       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1014 08:47:28.015233   15224 command_runner.go:130] ! I1014 15:22:47.810600       1 config.go:105] "Starting endpoint slice config controller"
	I1014 08:47:28.015233   15224 command_runner.go:130] ! I1014 15:22:47.810629       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1014 08:47:28.015233   15224 command_runner.go:130] ! I1014 15:22:47.811297       1 config.go:328] "Starting node config controller"
	I1014 08:47:28.015233   15224 command_runner.go:130] ! I1014 15:22:47.811309       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1014 08:47:28.015233   15224 command_runner.go:130] ! I1014 15:22:47.911443       1 shared_informer.go:320] Caches are synced for node config
	I1014 08:47:28.015233   15224 command_runner.go:130] ! I1014 15:22:47.911791       1 shared_informer.go:320] Caches are synced for service config
	I1014 08:47:28.015233   15224 command_runner.go:130] ! I1014 15:22:47.911866       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1014 08:47:28.018533   15224 logs.go:123] Gathering logs for kube-controller-manager [8af48c446f7e] ...
	I1014 08:47:28.018533   15224 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af48c446f7e"
	I1014 08:47:28.046117   15224 command_runner.go:130] ! I1014 15:46:12.989235       1 serving.go:386] Generated self-signed cert in-memory
	I1014 08:47:28.046922   15224 command_runner.go:130] ! I1014 15:46:13.820617       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I1014 08:47:28.046922   15224 command_runner.go:130] ! I1014 15:46:13.820897       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 08:47:28.046922   15224 command_runner.go:130] ! I1014 15:46:13.823101       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1014 08:47:28.046922   15224 command_runner.go:130] ! I1014 15:46:13.823494       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1014 08:47:28.047120   15224 command_runner.go:130] ! I1014 15:46:13.824132       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I1014 08:47:28.047295   15224 command_runner.go:130] ! I1014 15:46:13.824214       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1014 08:47:28.047444   15224 command_runner.go:130] ! I1014 15:46:17.208145       1 controllermanager.go:797] "Started controller" controller="serviceaccount-token-controller"
	I1014 08:47:28.047444   15224 command_runner.go:130] ! I1014 15:46:17.211496       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I1014 08:47:28.050136   15224 command_runner.go:130] ! I1014 15:46:17.268813       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I1014 08:47:28.051196   15224 command_runner.go:130] ! I1014 15:46:17.269727       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I1014 08:47:28.051196   15224 command_runner.go:130] ! I1014 15:46:17.270864       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I1014 08:47:28.051196   15224 command_runner.go:130] ! I1014 15:46:17.271094       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I1014 08:47:28.051315   15224 command_runner.go:130] ! I1014 15:46:17.271857       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I1014 08:47:28.051315   15224 command_runner.go:130] ! I1014 15:46:17.271962       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I1014 08:47:28.051348   15224 command_runner.go:130] ! I1014 15:46:17.272049       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I1014 08:47:28.051348   15224 command_runner.go:130] ! I1014 15:46:17.272075       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I1014 08:47:28.051348   15224 command_runner.go:130] ! I1014 15:46:17.273540       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I1014 08:47:28.051348   15224 command_runner.go:130] ! I1014 15:46:17.274245       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I1014 08:47:28.051448   15224 command_runner.go:130] ! I1014 15:46:17.274579       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I1014 08:47:28.051448   15224 command_runner.go:130] ! I1014 15:46:17.274747       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I1014 08:47:28.051520   15224 command_runner.go:130] ! I1014 15:46:17.274772       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I1014 08:47:28.051548   15224 command_runner.go:130] ! I1014 15:46:17.275348       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I1014 08:47:28.051548   15224 command_runner.go:130] ! I1014 15:46:17.275380       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I1014 08:47:28.051609   15224 command_runner.go:130] ! I1014 15:46:17.275397       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I1014 08:47:28.051609   15224 command_runner.go:130] ! I1014 15:46:17.275571       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I1014 08:47:28.051673   15224 command_runner.go:130] ! I1014 15:46:17.275603       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I1014 08:47:28.051673   15224 command_runner.go:130] ! W1014 15:46:17.275618       1 shared_informer.go:597] resyncPeriod 13h32m18.096579392s is smaller than resyncCheckPeriod 20h55m54.648340273s and the informer has already started. Changing it to 20h55m54.648340273s
	I1014 08:47:28.051739   15224 command_runner.go:130] ! I1014 15:46:17.276096       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I1014 08:47:28.051739   15224 command_runner.go:130] ! I1014 15:46:17.276150       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I1014 08:47:28.051739   15224 command_runner.go:130] ! I1014 15:46:17.276197       1 controllermanager.go:797] "Started controller" controller="resourcequota-controller"
	I1014 08:47:28.051808   15224 command_runner.go:130] ! I1014 15:46:17.276213       1 resource_quota_controller.go:300] "Starting resource quota controller" logger="resourcequota-controller"
	I1014 08:47:28.051808   15224 command_runner.go:130] ! I1014 15:46:17.276260       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I1014 08:47:28.051808   15224 command_runner.go:130] ! I1014 15:46:17.276359       1 resource_quota_monitor.go:308] "QuotaMonitor running" logger="resourcequota-controller"
	I1014 08:47:28.051808   15224 command_runner.go:130] ! I1014 15:46:17.283642       1 controllermanager.go:797] "Started controller" controller="replicaset-controller"
	I1014 08:47:28.051874   15224 command_runner.go:130] ! I1014 15:46:17.284697       1 replica_set.go:217] "Starting controller" logger="replicaset-controller" name="replicaset"
	I1014 08:47:28.051874   15224 command_runner.go:130] ! I1014 15:46:17.284913       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I1014 08:47:28.051874   15224 command_runner.go:130] ! I1014 15:46:17.288417       1 controllermanager.go:797] "Started controller" controller="cronjob-controller"
	I1014 08:47:28.051942   15224 command_runner.go:130] ! I1014 15:46:17.289073       1 cronjob_controllerv2.go:145] "Starting cronjob controller v2" logger="cronjob-controller"
	I1014 08:47:28.051942   15224 command_runner.go:130] ! I1014 15:46:17.289091       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I1014 08:47:28.051942   15224 command_runner.go:130] ! I1014 15:46:17.292212       1 controllermanager.go:797] "Started controller" controller="certificatesigningrequest-approving-controller"
	I1014 08:47:28.051942   15224 command_runner.go:130] ! I1014 15:46:17.292573       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I1014 08:47:28.051942   15224 command_runner.go:130] ! I1014 15:46:17.292591       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I1014 08:47:28.051942   15224 command_runner.go:130] ! I1014 15:46:17.295276       1 controllermanager.go:797] "Started controller" controller="clusterrole-aggregation-controller"
	I1014 08:47:28.052030   15224 command_runner.go:130] ! I1014 15:46:17.295785       1 controllermanager.go:749] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I1014 08:47:28.052030   15224 command_runner.go:130] ! I1014 15:46:17.298756       1 clusterroleaggregation_controller.go:194] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I1014 08:47:28.052030   15224 command_runner.go:130] ! I1014 15:46:17.299107       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I1014 08:47:28.052097   15224 command_runner.go:130] ! I1014 15:46:17.299997       1 controllermanager.go:797] "Started controller" controller="endpointslice-mirroring-controller"
	I1014 08:47:28.052136   15224 command_runner.go:130] ! I1014 15:46:17.302040       1 endpointslicemirroring_controller.go:227] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I1014 08:47:28.052161   15224 command_runner.go:130] ! I1014 15:46:17.302058       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I1014 08:47:28.052161   15224 command_runner.go:130] ! I1014 15:46:17.305668       1 controllermanager.go:797] "Started controller" controller="replicationcontroller-controller"
	I1014 08:47:28.052161   15224 command_runner.go:130] ! I1014 15:46:17.308801       1 replica_set.go:217] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I1014 08:47:28.052226   15224 command_runner.go:130] ! I1014 15:46:17.308819       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I1014 08:47:28.052226   15224 command_runner.go:130] ! I1014 15:46:17.318320       1 shared_informer.go:320] Caches are synced for tokens
	I1014 08:47:28.052226   15224 command_runner.go:130] ! I1014 15:46:17.329856       1 controllermanager.go:797] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I1014 08:47:28.052226   15224 command_runner.go:130] ! I1014 15:46:17.330990       1 horizontal.go:201] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I1014 08:47:28.052226   15224 command_runner.go:130] ! I1014 15:46:17.331395       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I1014 08:47:28.052226   15224 command_runner.go:130] ! I1014 15:46:17.345566       1 controllermanager.go:797] "Started controller" controller="disruption-controller"
	I1014 08:47:28.052226   15224 command_runner.go:130] ! I1014 15:46:17.345806       1 disruption.go:452] "Sending events to api server." logger="disruption-controller"
	I1014 08:47:28.052226   15224 command_runner.go:130] ! I1014 15:46:17.345841       1 disruption.go:463] "Starting disruption controller" logger="disruption-controller"
	I1014 08:47:28.052226   15224 command_runner.go:130] ! I1014 15:46:17.345937       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I1014 08:47:28.052226   15224 command_runner.go:130] ! E1014 15:46:17.350088       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I1014 08:47:28.052381   15224 command_runner.go:130] ! I1014 15:46:17.350237       1 controllermanager.go:775] "Warning: skipping controller" controller="service-lb-controller"
	I1014 08:47:28.052381   15224 command_runner.go:130] ! I1014 15:46:17.350277       1 controllermanager.go:775] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I1014 08:47:28.052381   15224 command_runner.go:130] ! I1014 15:46:17.359040       1 controllermanager.go:797] "Started controller" controller="endpoints-controller"
	I1014 08:47:28.052381   15224 command_runner.go:130] ! I1014 15:46:17.360243       1 endpoints_controller.go:182] "Starting endpoint controller" logger="endpoints-controller"
	I1014 08:47:28.052381   15224 command_runner.go:130] ! I1014 15:46:17.360265       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I1014 08:47:28.052473   15224 command_runner.go:130] ! I1014 15:46:17.362115       1 controllermanager.go:797] "Started controller" controller="pod-garbage-collector-controller"
	I1014 08:47:28.052473   15224 command_runner.go:130] ! I1014 15:46:17.362235       1 gc_controller.go:99] "Starting GC controller" logger="pod-garbage-collector-controller"
	I1014 08:47:28.052473   15224 command_runner.go:130] ! I1014 15:46:17.362245       1 shared_informer.go:313] Waiting for caches to sync for GC
	I1014 08:47:28.052473   15224 command_runner.go:130] ! I1014 15:46:17.364537       1 controllermanager.go:797] "Started controller" controller="serviceaccount-controller"
	I1014 08:47:28.052541   15224 command_runner.go:130] ! I1014 15:46:17.364725       1 serviceaccounts_controller.go:114] "Starting service account controller" logger="serviceaccount-controller"
	I1014 08:47:28.052541   15224 command_runner.go:130] ! I1014 15:46:17.364738       1 shared_informer.go:313] Waiting for caches to sync for service account
	I1014 08:47:28.052609   15224 command_runner.go:130] ! I1014 15:46:17.367152       1 controllermanager.go:797] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I1014 08:47:28.052609   15224 command_runner.go:130] ! I1014 15:46:17.367373       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I1014 08:47:28.052609   15224 command_runner.go:130] ! I1014 15:46:17.369619       1 controllermanager.go:797] "Started controller" controller="bootstrap-signer-controller"
	I1014 08:47:28.052609   15224 command_runner.go:130] ! I1014 15:46:17.370097       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I1014 08:47:28.052609   15224 command_runner.go:130] ! I1014 15:46:17.373109       1 controllermanager.go:797] "Started controller" controller="token-cleaner-controller"
	I1014 08:47:28.052673   15224 command_runner.go:130] ! I1014 15:46:17.373475       1 tokencleaner.go:117] "Starting token cleaner controller" logger="token-cleaner-controller"
	I1014 08:47:28.052673   15224 command_runner.go:130] ! I1014 15:46:17.373486       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I1014 08:47:28.052741   15224 command_runner.go:130] ! I1014 15:46:17.373493       1 shared_informer.go:320] Caches are synced for token_cleaner
	I1014 08:47:28.052741   15224 command_runner.go:130] ! I1014 15:46:17.375506       1 controllermanager.go:797] "Started controller" controller="ttl-after-finished-controller"
	I1014 08:47:28.052741   15224 command_runner.go:130] ! I1014 15:46:17.375684       1 ttlafterfinished_controller.go:112] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I1014 08:47:28.052741   15224 command_runner.go:130] ! I1014 15:46:17.375694       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I1014 08:47:28.052807   15224 command_runner.go:130] ! I1014 15:46:17.379552       1 controllermanager.go:797] "Started controller" controller="ephemeral-volume-controller"
	I1014 08:47:28.052807   15224 command_runner.go:130] ! I1014 15:46:17.380063       1 controller.go:173] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I1014 08:47:28.052807   15224 command_runner.go:130] ! I1014 15:46:17.380270       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I1014 08:47:28.052807   15224 command_runner.go:130] ! I1014 15:46:17.413079       1 controllermanager.go:797] "Started controller" controller="namespace-controller"
	I1014 08:47:28.052875   15224 command_runner.go:130] ! I1014 15:46:17.413676       1 namespace_controller.go:202] "Starting namespace controller" logger="namespace-controller"
	I1014 08:47:28.052875   15224 command_runner.go:130] ! I1014 15:46:17.415689       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I1014 08:47:28.052875   15224 command_runner.go:130] ! I1014 15:46:17.418729       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I1014 08:47:28.052875   15224 command_runner.go:130] ! I1014 15:46:17.418858       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I1014 08:47:28.052948   15224 command_runner.go:130] ! I1014 15:46:17.418983       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I1014 08:47:28.052948   15224 command_runner.go:130] ! I1014 15:46:17.420448       1 controllermanager.go:797] "Started controller" controller="certificatesigningrequest-signing-controller"
	I1014 08:47:28.052948   15224 command_runner.go:130] ! I1014 15:46:17.420573       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I1014 08:47:28.052948   15224 command_runner.go:130] ! I1014 15:46:17.420658       1 controllermanager.go:775] "Warning: skipping controller" controller="node-route-controller"
	I1014 08:47:28.053081   15224 command_runner.go:130] ! I1014 15:46:17.420878       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I1014 08:47:28.053081   15224 command_runner.go:130] ! I1014 15:46:17.422022       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I1014 08:47:28.053081   15224 command_runner.go:130] ! I1014 15:46:17.422169       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I1014 08:47:28.053081   15224 command_runner.go:130] ! I1014 15:46:17.422636       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I1014 08:47:28.053188   15224 command_runner.go:130] ! I1014 15:46:17.425521       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I1014 08:47:28.053188   15224 command_runner.go:130] ! I1014 15:46:17.425557       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I1014 08:47:28.053188   15224 command_runner.go:130] ! I1014 15:46:17.425747       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I1014 08:47:28.053250   15224 command_runner.go:130] ! I1014 15:46:17.425569       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I1014 08:47:28.053250   15224 command_runner.go:130] ! I1014 15:46:17.425577       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I1014 08:47:28.053283   15224 command_runner.go:130] ! E1014 15:46:17.429609       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I1014 08:47:28.053307   15224 command_runner.go:130] ! I1014 15:46:17.429771       1 controllermanager.go:775] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I1014 08:47:28.053307   15224 command_runner.go:130] ! I1014 15:46:17.432720       1 controllermanager.go:797] "Started controller" controller="persistentvolume-attach-detach-controller"
	I1014 08:47:28.053307   15224 command_runner.go:130] ! I1014 15:46:17.433242       1 attach_detach_controller.go:338] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I1014 08:47:28.053307   15224 command_runner.go:130] ! I1014 15:46:17.433509       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I1014 08:47:28.053307   15224 command_runner.go:130] ! I1014 15:46:17.437867       1 controllermanager.go:797] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I1014 08:47:28.053307   15224 command_runner.go:130] ! I1014 15:46:17.438432       1 pvc_protection_controller.go:105] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I1014 08:47:28.053307   15224 command_runner.go:130] ! I1014 15:46:17.438754       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I1014 08:47:28.053307   15224 command_runner.go:130] ! I1014 15:46:17.466996       1 controllermanager.go:797] "Started controller" controller="garbage-collector-controller"
	I1014 08:47:28.053307   15224 command_runner.go:130] ! I1014 15:46:17.467178       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I1014 08:47:28.053307   15224 command_runner.go:130] ! I1014 15:46:17.467191       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I1014 08:47:28.053307   15224 command_runner.go:130] ! I1014 15:46:17.467211       1 graph_builder.go:351] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I1014 08:47:28.053307   15224 command_runner.go:130] ! I1014 15:46:17.513974       1 controllermanager.go:797] "Started controller" controller="daemonset-controller"
	I1014 08:47:28.053307   15224 command_runner.go:130] ! I1014 15:46:17.514092       1 daemon_controller.go:294] "Starting daemon sets controller" logger="daemonset-controller"
	I1014 08:47:28.053307   15224 command_runner.go:130] ! I1014 15:46:17.514103       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I1014 08:47:28.053307   15224 command_runner.go:130] ! I1014 15:46:17.612272       1 controllermanager.go:797] "Started controller" controller="deployment-controller"
	I1014 08:47:28.053307   15224 command_runner.go:130] ! I1014 15:46:17.612390       1 deployment_controller.go:173] "Starting controller" logger="deployment-controller" controller="deployment"
	I1014 08:47:28.053307   15224 command_runner.go:130] ! I1014 15:46:17.612405       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I1014 08:47:28.053307   15224 command_runner.go:130] ! I1014 15:46:17.715625       1 controllermanager.go:797] "Started controller" controller="ttl-controller"
	I1014 08:47:28.053307   15224 command_runner.go:130] ! I1014 15:46:17.718491       1 ttl_controller.go:127] "Starting TTL controller" logger="ttl-controller"
	I1014 08:47:28.053307   15224 command_runner.go:130] ! I1014 15:46:17.718512       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I1014 08:47:28.053307   15224 command_runner.go:130] ! I1014 15:46:17.762259       1 node_lifecycle_controller.go:430] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I1014 08:47:28.053307   15224 command_runner.go:130] ! I1014 15:46:17.762792       1 controllermanager.go:797] "Started controller" controller="node-lifecycle-controller"
	I1014 08:47:28.053307   15224 command_runner.go:130] ! I1014 15:46:17.763108       1 node_lifecycle_controller.go:464] "Sending events to api server" logger="node-lifecycle-controller"
	I1014 08:47:28.053307   15224 command_runner.go:130] ! I1014 15:46:17.763488       1 node_lifecycle_controller.go:475] "Starting node controller" logger="node-lifecycle-controller"
	I1014 08:47:28.053307   15224 command_runner.go:130] ! I1014 15:46:17.763636       1 shared_informer.go:313] Waiting for caches to sync for taint
	I1014 08:47:28.053307   15224 command_runner.go:130] ! I1014 15:46:17.815269       1 controllermanager.go:797] "Started controller" controller="persistentvolume-expander-controller"
	I1014 08:47:28.053307   15224 command_runner.go:130] ! I1014 15:46:17.815926       1 controllermanager.go:749] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I1014 08:47:28.053307   15224 command_runner.go:130] ! I1014 15:46:17.815820       1 expand_controller.go:328] "Starting expand controller" logger="persistentvolume-expander-controller"
	I1014 08:47:28.053859   15224 command_runner.go:130] ! I1014 15:46:17.815981       1 shared_informer.go:313] Waiting for caches to sync for expand
	I1014 08:47:28.053904   15224 command_runner.go:130] ! I1014 15:46:17.865803       1 controllermanager.go:797] "Started controller" controller="taint-eviction-controller"
	I1014 08:47:28.053904   15224 command_runner.go:130] ! I1014 15:46:17.865833       1 controllermanager.go:749] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I1014 08:47:28.053904   15224 command_runner.go:130] ! I1014 15:46:17.865908       1 taint_eviction.go:281] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I1014 08:47:28.053904   15224 command_runner.go:130] ! I1014 15:46:17.865945       1 taint_eviction.go:287] "Sending events to api server" logger="taint-eviction-controller"
	I1014 08:47:28.053966   15224 command_runner.go:130] ! I1014 15:46:17.865986       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I1014 08:47:28.053966   15224 command_runner.go:130] ! I1014 15:46:17.923932       1 controllermanager.go:797] "Started controller" controller="job-controller"
	I1014 08:47:28.053966   15224 command_runner.go:130] ! I1014 15:46:17.924153       1 job_controller.go:226] "Starting job controller" logger="job-controller"
	I1014 08:47:28.053966   15224 command_runner.go:130] ! I1014 15:46:17.924184       1 shared_informer.go:313] Waiting for caches to sync for job
	I1014 08:47:28.054040   15224 command_runner.go:130] ! I1014 15:46:17.978728       1 controllermanager.go:797] "Started controller" controller="root-ca-certificate-publisher-controller"
	I1014 08:47:28.054040   15224 command_runner.go:130] ! I1014 15:46:17.978796       1 publisher.go:107] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I1014 08:47:28.054040   15224 command_runner.go:130] ! I1014 15:46:17.978809       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I1014 08:47:28.054094   15224 command_runner.go:130] ! I1014 15:46:18.018003       1 controllermanager.go:797] "Started controller" controller="endpointslice-controller"
	I1014 08:47:28.054117   15224 command_runner.go:130] ! I1014 15:46:18.018177       1 endpointslice_controller.go:281] "Starting endpoint slice controller" logger="endpointslice-controller"
	I1014 08:47:28.054117   15224 command_runner.go:130] ! I1014 15:46:18.018192       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I1014 08:47:28.054182   15224 command_runner.go:130] ! I1014 15:46:18.077409       1 controllermanager.go:797] "Started controller" controller="statefulset-controller"
	I1014 08:47:28.054182   15224 command_runner.go:130] ! I1014 15:46:18.078007       1 stateful_set.go:166] "Starting stateful set controller" logger="statefulset-controller"
	I1014 08:47:28.054182   15224 command_runner.go:130] ! I1014 15:46:18.078026       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I1014 08:47:28.054261   15224 command_runner.go:130] ! I1014 15:46:18.245465       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I1014 08:47:28.054282   15224 command_runner.go:130] ! I1014 15:46:18.246368       1 controllermanager.go:797] "Started controller" controller="node-ipam-controller"
	I1014 08:47:28.054282   15224 command_runner.go:130] ! I1014 15:46:18.246712       1 node_ipam_controller.go:141] "Starting ipam controller" logger="node-ipam-controller"
	I1014 08:47:28.054282   15224 command_runner.go:130] ! I1014 15:46:18.246910       1 shared_informer.go:313] Waiting for caches to sync for node
	I1014 08:47:28.054282   15224 command_runner.go:130] ! I1014 15:46:18.264869       1 controllermanager.go:797] "Started controller" controller="persistentvolume-binder-controller"
	I1014 08:47:28.054342   15224 command_runner.go:130] ! I1014 15:46:18.264984       1 pv_controller_base.go:308] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I1014 08:47:28.054367   15224 command_runner.go:130] ! I1014 15:46:18.266232       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I1014 08:47:28.054367   15224 command_runner.go:130] ! I1014 15:46:18.321121       1 controllermanager.go:797] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I1014 08:47:28.054419   15224 command_runner.go:130] ! I1014 15:46:18.323482       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I1014 08:47:28.054442   15224 command_runner.go:130] ! I1014 15:46:18.323903       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I1014 08:47:28.054442   15224 command_runner.go:130] ! I1014 15:46:18.431796       1 controllermanager.go:797] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I1014 08:47:28.054471   15224 command_runner.go:130] ! I1014 15:46:18.431873       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I1014 08:47:28.054471   15224 command_runner.go:130] ! I1014 15:46:18.465851       1 controllermanager.go:797] "Started controller" controller="persistentvolume-protection-controller"
	I1014 08:47:28.054471   15224 command_runner.go:130] ! I1014 15:46:18.468767       1 pv_protection_controller.go:81] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I1014 08:47:28.054471   15224 command_runner.go:130] ! I1014 15:46:18.469028       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I1014 08:47:28.054471   15224 command_runner.go:130] ! I1014 15:46:18.485571       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I1014 08:47:28.054471   15224 command_runner.go:130] ! I1014 15:46:18.534720       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I1014 08:47:28.054471   15224 command_runner.go:130] ! I1014 15:46:18.539015       1 shared_informer.go:320] Caches are synced for PVC protection
	I1014 08:47:28.054471   15224 command_runner.go:130] ! I1014 15:46:18.541399       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-671000\" does not exist"
	I1014 08:47:28.054471   15224 command_runner.go:130] ! I1014 15:46:18.541615       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-671000-m02\" does not exist"
	I1014 08:47:28.054471   15224 command_runner.go:130] ! I1014 15:46:18.549102       1 shared_informer.go:320] Caches are synced for TTL
	I1014 08:47:28.054471   15224 command_runner.go:130] ! I1014 15:46:18.549549       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-671000-m03\" does not exist"
	I1014 08:47:28.054471   15224 command_runner.go:130] ! I1014 15:46:18.550590       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I1014 08:47:28.054471   15224 command_runner.go:130] ! I1014 15:46:18.551387       1 shared_informer.go:320] Caches are synced for node
	I1014 08:47:28.054471   15224 command_runner.go:130] ! I1014 15:46:18.554673       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I1014 08:47:28.054471   15224 command_runner.go:130] ! I1014 15:46:18.557592       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1014 08:47:28.054471   15224 command_runner.go:130] ! I1014 15:46:18.558471       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I1014 08:47:28.054471   15224 command_runner.go:130] ! I1014 15:46:18.558669       1 shared_informer.go:320] Caches are synced for cidrallocator
	I1014 08:47:28.054471   15224 command_runner.go:130] ! I1014 15:46:18.559066       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000"
	I1014 08:47:28.054471   15224 command_runner.go:130] ! I1014 15:46:18.559166       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:28.054471   15224 command_runner.go:130] ! I1014 15:46:18.559144       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:28.054471   15224 command_runner.go:130] ! I1014 15:46:18.560823       1 shared_informer.go:320] Caches are synced for endpoint
	I1014 08:47:28.054471   15224 command_runner.go:130] ! I1014 15:46:18.563147       1 shared_informer.go:320] Caches are synced for GC
	I1014 08:47:28.054471   15224 command_runner.go:130] ! I1014 15:46:18.566072       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I1014 08:47:28.054471   15224 command_runner.go:130] ! I1014 15:46:18.566447       1 shared_informer.go:320] Caches are synced for persistent volume
	I1014 08:47:28.054471   15224 command_runner.go:130] ! I1014 15:46:18.566267       1 shared_informer.go:320] Caches are synced for service account
	I1014 08:47:28.054471   15224 command_runner.go:130] ! I1014 15:46:18.570369       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I1014 08:47:28.054471   15224 command_runner.go:130] ! I1014 15:46:18.570522       1 shared_informer.go:320] Caches are synced for PV protection
	I1014 08:47:28.054471   15224 command_runner.go:130] ! I1014 15:46:18.577368       1 shared_informer.go:320] Caches are synced for TTL after finished
	I1014 08:47:28.054471   15224 command_runner.go:130] ! I1014 15:46:18.580187       1 shared_informer.go:320] Caches are synced for crt configmap
	I1014 08:47:28.054471   15224 command_runner.go:130] ! I1014 15:46:18.580534       1 shared_informer.go:320] Caches are synced for ephemeral
	I1014 08:47:28.054471   15224 command_runner.go:130] ! I1014 15:46:18.585372       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I1014 08:47:28.054471   15224 command_runner.go:130] ! I1014 15:46:18.593972       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I1014 08:47:28.054471   15224 command_runner.go:130] ! I1014 15:46:18.595014       1 shared_informer.go:320] Caches are synced for cronjob
	I1014 08:47:28.055031   15224 command_runner.go:130] ! I1014 15:46:18.600012       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I1014 08:47:28.055031   15224 command_runner.go:130] ! I1014 15:46:18.602930       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I1014 08:47:28.055091   15224 command_runner.go:130] ! I1014 15:46:18.609680       1 shared_informer.go:320] Caches are synced for ReplicationController
	I1014 08:47:28.055091   15224 command_runner.go:130] ! I1014 15:46:18.613447       1 shared_informer.go:320] Caches are synced for deployment
	I1014 08:47:28.055091   15224 command_runner.go:130] ! I1014 15:46:18.616246       1 shared_informer.go:320] Caches are synced for expand
	I1014 08:47:28.055091   15224 command_runner.go:130] ! I1014 15:46:18.616739       1 shared_informer.go:320] Caches are synced for namespace
	I1014 08:47:28.055091   15224 command_runner.go:130] ! I1014 15:46:18.618534       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I1014 08:47:28.055091   15224 command_runner.go:130] ! I1014 15:46:18.625249       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I1014 08:47:28.055188   15224 command_runner.go:130] ! I1014 15:46:18.630423       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I1014 08:47:28.055221   15224 command_runner.go:130] ! I1014 15:46:18.632938       1 shared_informer.go:320] Caches are synced for HPA
	I1014 08:47:28.055251   15224 command_runner.go:130] ! I1014 15:46:18.633193       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I1014 08:47:28.055251   15224 command_runner.go:130] ! I1014 15:46:18.634381       1 shared_informer.go:320] Caches are synced for job
	I1014 08:47:28.055251   15224 command_runner.go:130] ! I1014 15:46:18.634623       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1014 08:47:28.055251   15224 command_runner.go:130] ! I1014 15:46:18.634920       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I1014 08:47:28.055251   15224 command_runner.go:130] ! I1014 15:46:18.649619       1 shared_informer.go:320] Caches are synced for disruption
	I1014 08:47:28.055251   15224 command_runner.go:130] ! I1014 15:46:18.668155       1 shared_informer.go:320] Caches are synced for taint
	I1014 08:47:28.055251   15224 command_runner.go:130] ! I1014 15:46:18.670026       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1014 08:47:28.055251   15224 command_runner.go:130] ! I1014 15:46:18.680357       1 shared_informer.go:320] Caches are synced for stateful set
	I1014 08:47:28.055251   15224 command_runner.go:130] ! I1014 15:46:18.700582       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000"
	I1014 08:47:28.055251   15224 command_runner.go:130] ! I1014 15:46:18.708812       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:28.055251   15224 command_runner.go:130] ! I1014 15:46:18.714134       1 shared_informer.go:320] Caches are synced for daemon sets
	I1014 08:47:28.055251   15224 command_runner.go:130] ! I1014 15:46:18.718536       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-671000"
	I1014 08:47:28.055251   15224 command_runner.go:130] ! I1014 15:46:18.718841       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-671000-m02"
	I1014 08:47:28.055251   15224 command_runner.go:130] ! I1014 15:46:18.719036       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-671000-m03"
	I1014 08:47:28.055251   15224 command_runner.go:130] ! I1014 15:46:18.721210       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="135.448763ms"
	I1014 08:47:28.055251   15224 command_runner.go:130] ! I1014 15:46:18.721514       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="40.1µs"
	I1014 08:47:28.055251   15224 command_runner.go:130] ! I1014 15:46:18.721809       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="136.173363ms"
	I1014 08:47:28.055251   15224 command_runner.go:130] ! I1014 15:46:18.722033       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="29.5µs"
	I1014 08:47:28.055251   15224 command_runner.go:130] ! I1014 15:46:18.722234       1 node_lifecycle_controller.go:1036] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1014 08:47:28.055251   15224 command_runner.go:130] ! I1014 15:46:18.777385       1 shared_informer.go:320] Caches are synced for resource quota
	I1014 08:47:28.055251   15224 command_runner.go:130] ! I1014 15:46:18.786812       1 shared_informer.go:320] Caches are synced for resource quota
	I1014 08:47:28.055251   15224 command_runner.go:130] ! I1014 15:46:18.833914       1 shared_informer.go:320] Caches are synced for attach detach
	I1014 08:47:28.055251   15224 command_runner.go:130] ! I1014 15:46:19.252391       1 shared_informer.go:320] Caches are synced for garbage collector
	I1014 08:47:28.055251   15224 command_runner.go:130] ! I1014 15:46:19.267855       1 shared_informer.go:320] Caches are synced for garbage collector
	I1014 08:47:28.055251   15224 command_runner.go:130] ! I1014 15:46:19.268119       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I1014 08:47:28.055251   15224 command_runner.go:130] ! I1014 15:46:59.871635       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000"
	I1014 08:47:28.055251   15224 command_runner.go:130] ! I1014 15:46:59.892163       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000"
	I1014 08:47:28.055251   15224 command_runner.go:130] ! I1014 15:47:03.736416       1 node_lifecycle_controller.go:1055] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I1014 08:47:28.055251   15224 command_runner.go:130] ! I1014 15:47:13.821153       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:28.055251   15224 command_runner.go:130] ! I1014 15:47:20.979721       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="114.5µs"
	I1014 08:47:28.055251   15224 command_runner.go:130] ! I1014 15:47:22.061324       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="10.05527ms"
	I1014 08:47:28.055251   15224 command_runner.go:130] ! I1014 15:47:22.062652       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="53.8µs"
	I1014 08:47:28.055251   15224 command_runner.go:130] ! I1014 15:47:22.098955       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="28.422114ms"
	I1014 08:47:28.055251   15224 command_runner.go:130] ! I1014 15:47:22.099794       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="313.699µs"
	I1014 08:47:28.055251   15224 command_runner.go:130] ! I1014 15:47:23.920002       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:28.072615   15224 logs.go:123] Gathering logs for kindnet [fcdf89a3ac8c] ...
	I1014 08:47:28.072615   15224 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcdf89a3ac8c"
	I1014 08:47:28.105389   15224 command_runner.go:130] ! I1014 15:32:44.862261       1 main.go:300] handling current node
	I1014 08:47:28.105857   15224 command_runner.go:130] ! I1014 15:32:44.862301       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.105857   15224 command_runner.go:130] ! I1014 15:32:44.862313       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.105857   15224 command_runner.go:130] ! I1014 15:32:44.862605       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.105857   15224 command_runner.go:130] ! I1014 15:32:44.862636       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.105857   15224 command_runner.go:130] ! I1014 15:32:54.862103       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.105857   15224 command_runner.go:130] ! I1014 15:32:54.862232       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.105857   15224 command_runner.go:130] ! I1014 15:32:54.862979       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.105857   15224 command_runner.go:130] ! I1014 15:32:54.863013       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.105857   15224 command_runner.go:130] ! I1014 15:32:54.863219       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.105857   15224 command_runner.go:130] ! I1014 15:32:54.863233       1 main.go:300] handling current node
	I1014 08:47:28.106003   15224 command_runner.go:130] ! I1014 15:33:04.864377       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.106003   15224 command_runner.go:130] ! I1014 15:33:04.864510       1 main.go:300] handling current node
	I1014 08:47:28.106003   15224 command_runner.go:130] ! I1014 15:33:04.864534       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.106003   15224 command_runner.go:130] ! I1014 15:33:04.864544       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.106003   15224 command_runner.go:130] ! I1014 15:33:04.864795       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.106003   15224 command_runner.go:130] ! I1014 15:33:04.864807       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.106003   15224 command_runner.go:130] ! I1014 15:33:14.870098       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.106003   15224 command_runner.go:130] ! I1014 15:33:14.870279       1 main.go:300] handling current node
	I1014 08:47:28.106003   15224 command_runner.go:130] ! I1014 15:33:14.870319       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.106003   15224 command_runner.go:130] ! I1014 15:33:14.870394       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.106003   15224 command_runner.go:130] ! I1014 15:33:14.872221       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.106003   15224 command_runner.go:130] ! I1014 15:33:14.872265       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.106003   15224 command_runner.go:130] ! I1014 15:33:24.862168       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.106003   15224 command_runner.go:130] ! I1014 15:33:24.862234       1 main.go:300] handling current node
	I1014 08:47:28.106003   15224 command_runner.go:130] ! I1014 15:33:24.862290       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.106003   15224 command_runner.go:130] ! I1014 15:33:24.862303       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.106003   15224 command_runner.go:130] ! I1014 15:33:24.862799       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.106003   15224 command_runner.go:130] ! I1014 15:33:24.862950       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.106003   15224 command_runner.go:130] ! I1014 15:33:34.870712       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.106003   15224 command_runner.go:130] ! I1014 15:33:34.870952       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.106003   15224 command_runner.go:130] ! I1014 15:33:34.871749       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.106003   15224 command_runner.go:130] ! I1014 15:33:34.871848       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.106003   15224 command_runner.go:130] ! I1014 15:33:34.872312       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.106003   15224 command_runner.go:130] ! I1014 15:33:34.872409       1 main.go:300] handling current node
	I1014 08:47:28.106003   15224 command_runner.go:130] ! I1014 15:33:44.868271       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.106003   15224 command_runner.go:130] ! I1014 15:33:44.868442       1 main.go:300] handling current node
	I1014 08:47:28.106003   15224 command_runner.go:130] ! I1014 15:33:44.868482       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.106003   15224 command_runner.go:130] ! I1014 15:33:44.868509       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.106003   15224 command_runner.go:130] ! I1014 15:33:44.869165       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.106003   15224 command_runner.go:130] ! I1014 15:33:44.869252       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.106003   15224 command_runner.go:130] ! I1014 15:33:54.862162       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.106003   15224 command_runner.go:130] ! I1014 15:33:54.862365       1 main.go:300] handling current node
	I1014 08:47:28.106003   15224 command_runner.go:130] ! I1014 15:33:54.862404       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.106003   15224 command_runner.go:130] ! I1014 15:33:54.862429       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:33:54.862766       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:33:54.862800       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:34:04.870860       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:34:04.870993       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:34:04.871751       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:34:04.871830       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:34:04.872365       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:34:04.872444       1 main.go:300] handling current node
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:34:14.868274       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:34:14.868410       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:34:14.869151       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:34:14.869244       1 main.go:300] handling current node
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:34:14.869263       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:34:14.869271       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:34:24.869326       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:34:24.869383       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:34:24.870365       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:34:24.870464       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:34:24.871197       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:34:24.871235       1 main.go:300] handling current node
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:34:34.862280       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:34:34.862387       1 main.go:300] handling current node
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:34:34.862420       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:34:34.862440       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:34:34.862809       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:34:34.862844       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:34:44.870611       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:34:44.870703       1 main.go:300] handling current node
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:34:44.870732       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:34:44.870826       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:34:44.871348       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:34:44.871437       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:34:54.862260       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:34:54.862358       1 main.go:300] handling current node
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:34:54.862379       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:34:54.862388       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:34:54.862782       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:34:54.862862       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:35:04.871418       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:35:04.871489       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:35:04.872322       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:35:04.872416       1 main.go:300] handling current node
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:35:04.872437       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:35:04.872445       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:35:14.870301       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:35:14.870413       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:35:14.870922       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:35:14.870941       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:35:14.871055       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:35:14.871086       1 main.go:300] handling current node
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:35:24.870776       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:35:24.870814       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:35:24.871449       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:35:24.871682       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:35:24.872057       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:35:24.872149       1 main.go:300] handling current node
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:35:34.871155       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:35:34.871422       1 main.go:300] handling current node
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:35:34.876612       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:35:34.876630       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:35:34.876808       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:35:34.876817       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:35:44.872405       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:35:44.872450       1 main.go:300] handling current node
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:35:44.872467       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:35:44.872473       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:35:44.873120       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:35:44.873155       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:35:54.862113       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:35:54.862220       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:35:54.862608       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:35:54.862725       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:35:54.862993       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:35:54.863089       1 main.go:300] handling current node
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:36:04.870594       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:36:04.870634       1 main.go:300] handling current node
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:36:04.870705       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.106403   15224 command_runner.go:130] ! I1014 15:36:04.870719       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:36:04.871246       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:36:04.871261       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:36:14.862194       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:36:14.862337       1 main.go:300] handling current node
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:36:14.862361       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:36:14.862370       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:36:14.863024       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:36:14.863053       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:36:24.870839       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:36:24.871114       1 main.go:300] handling current node
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:36:24.871303       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:36:24.871618       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:36:24.872052       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:36:24.872164       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:36:34.870320       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:36:34.870375       1 main.go:300] handling current node
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:36:34.870396       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:36:34.870404       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:36:34.870774       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:36:34.870810       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:36:44.864305       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:36:44.864530       1 main.go:300] handling current node
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:36:44.864616       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:36:44.864683       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:36:44.865206       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:36:44.865241       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:36:54.862701       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:36:54.862834       1 main.go:300] handling current node
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:36:54.862940       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:36:54.863054       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:36:54.864321       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:36:54.864397       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:37:04.863761       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:37:04.863854       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:37:04.864505       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:37:04.864638       1 main.go:300] handling current node
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:37:04.864656       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:37:04.864664       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:37:14.866293       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:37:14.866653       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:37:14.867034       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:37:14.867067       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:37:14.867179       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:37:14.867247       1 main.go:300] handling current node
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:37:24.867969       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:37:24.868019       1 main.go:300] handling current node
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:37:24.868036       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:37:24.868043       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:37:24.868511       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:37:24.868549       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:37:34.863786       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:37:34.864224       1 main.go:300] handling current node
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:37:34.864384       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:37:34.864448       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:37:34.864771       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:37:34.864865       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:37:44.871310       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:37:44.871352       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:37:44.871803       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:37:44.871837       1 main.go:300] handling current node
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:37:44.871852       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:37:44.871859       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.107433   15224 command_runner.go:130] ! I1014 15:37:54.862573       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:37:54.862694       1 main.go:300] handling current node
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:37:54.862714       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:37:54.862723       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:37:54.863288       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:37:54.863364       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:38:04.872124       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:38:04.872285       1 main.go:300] handling current node
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:38:04.872330       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:38:04.872343       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:38:04.873184       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:38:04.873352       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:38:14.863654       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:38:14.863788       1 main.go:300] handling current node
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:38:14.863812       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:38:14.863822       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:38:14.864488       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:38:14.864585       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:38:24.868537       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:38:24.868643       1 main.go:300] handling current node
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:38:24.868664       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:38:24.868672       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:38:24.869258       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:38:24.869347       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:38:34.864233       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:38:34.864469       1 main.go:300] handling current node
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:38:34.864497       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:38:34.864509       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:38:34.865023       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:38:34.865061       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:38:44.870754       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:38:44.870859       1 main.go:300] handling current node
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:38:44.870919       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:38:44.870931       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:38:44.871124       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:38:44.871155       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:38:54.862849       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:38:54.863008       1 main.go:300] handling current node
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:38:54.863029       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:38:54.863040       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:38:54.863313       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:38:54.863343       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:39:04.861865       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:39:04.862353       1 main.go:300] handling current node
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:39:04.862819       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:39:04.863053       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:39:04.863648       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:39:04.865127       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:39:14.870473       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:39:14.870526       1 main.go:300] handling current node
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:39:14.870544       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:39:14.870551       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:39:14.871123       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:39:14.871161       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:39:24.862264       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:39:24.862304       1 main.go:300] handling current node
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:39:24.862323       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:39:24.862331       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:39:24.863326       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:39:24.863417       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:39:34.862868       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:39:34.863041       1 main.go:300] handling current node
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:39:34.863063       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:39:34.863072       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:39:34.863370       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:39:34.863460       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:39:44.872051       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:39:44.872175       1 main.go:300] handling current node
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:39:44.872198       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:39:44.872392       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:39:44.873038       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:39:44.873160       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:39:54.862953       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:39:54.862990       1 main.go:300] handling current node
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:39:54.863013       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:39:54.863022       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:39:54.863377       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:39:54.863412       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.108391   15224 command_runner.go:130] ! I1014 15:40:04.864160       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:40:04.864198       1 main.go:300] handling current node
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:40:04.864216       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:40:04.864222       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:40:04.864390       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:40:04.864399       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:40:14.862864       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:40:14.863081       1 main.go:300] handling current node
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:40:14.863442       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:40:14.863496       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:40:14.864019       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:40:14.864052       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:40:24.867383       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:40:24.867717       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:40:24.868487       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:40:24.868619       1 main.go:300] handling current node
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:40:24.868640       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:40:24.868650       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:40:34.866060       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:40:34.866194       1 main.go:300] handling current node
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:40:34.866224       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:40:34.866240       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:40:34.867632       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:40:34.867868       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:40:44.875002       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:40:44.875336       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:40:44.875792       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:40:44.875991       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:40:44.876302       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:40:44.876531       1 main.go:300] handling current node
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:40:54.862504       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:40:54.862640       1 main.go:300] handling current node
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:40:54.862766       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:40:54.862834       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:40:54.863108       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:40:54.863140       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:41:04.863181       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:41:04.863304       1 main.go:300] handling current node
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:41:04.863326       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:41:04.863335       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:41:04.863824       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:41:04.863963       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:41:14.868270       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:41:14.868443       1 main.go:300] handling current node
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:41:14.868487       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:41:14.868541       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:41:14.868808       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:41:14.868843       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:41:24.862261       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:41:24.862508       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:41:24.863242       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:41:24.863792       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:41:24.864172       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:41:24.864327       1 main.go:300] handling current node
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:41:34.862294       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:41:34.862355       1 main.go:300] handling current node
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:41:34.862377       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:41:34.862385       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:41:44.862674       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:41:44.862799       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:41:44.863254       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.20.102.29 Flags: [] Table: 0 Realm: 0} 
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:41:44.863509       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:41:44.863768       1 main.go:300] handling current node
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:41:44.863945       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:41:44.864052       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:41:54.862083       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:41:54.862208       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:41:54.862577       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:41:54.862723       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:41:54.863005       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:41:54.863097       1 main.go:300] handling current node
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:42:04.870504       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:42:04.871039       1 main.go:300] handling current node
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:42:04.871167       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:42:04.871277       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:42:04.871721       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:42:04.871740       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:42:14.862252       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:42:14.862455       1 main.go:300] handling current node
	I1014 08:47:28.109390   15224 command_runner.go:130] ! I1014 15:42:14.862499       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:42:14.862521       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:42:14.863189       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:42:14.863224       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:42:24.862819       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:42:24.863072       1 main.go:300] handling current node
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:42:24.863093       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:42:24.863103       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:42:24.864093       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:42:24.864136       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:42:34.863373       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:42:34.863425       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:42:34.863670       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:42:34.863742       1 main.go:300] handling current node
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:42:34.863763       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:42:34.863771       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:42:44.861842       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:42:44.862176       1 main.go:300] handling current node
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:42:44.862271       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:42:44.862357       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:42:44.862743       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:42:44.863009       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:42:54.863140       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:42:54.863181       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:42:54.863865       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:42:54.864051       1 main.go:300] handling current node
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:42:54.864417       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:42:54.864427       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:43:04.862539       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:43:04.862625       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:43:04.863289       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:43:04.863395       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:43:04.863612       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:43:04.863764       1 main.go:300] handling current node
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:43:14.871242       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:43:14.871727       1 main.go:300] handling current node
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:43:14.871818       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:43:14.871846       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:43:14.872085       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:43:14.872201       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:43:24.871405       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:43:24.871540       1 main.go:300] handling current node
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:43:24.871566       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:43:24.871575       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:43:24.871835       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:43:24.872193       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:43:34.863042       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:43:34.863237       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:43:34.863962       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:43:34.864059       1 main.go:300] handling current node
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:43:34.864077       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:43:34.864085       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:43:44.871016       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:43:44.871057       1 main.go:300] handling current node
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:43:44.871074       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:43:44.871081       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:43:44.871299       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:28.110390   15224 command_runner.go:130] ! I1014 15:43:44.871310       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:28.127469   15224 logs.go:123] Gathering logs for dmesg ...
	I1014 08:47:28.127469   15224 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 08:47:28.150692   15224 command_runner.go:130] > [Oct14 15:44] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I1014 08:47:28.150760   15224 command_runner.go:130] > [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I1014 08:47:28.150760   15224 command_runner.go:130] > [  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I1014 08:47:28.150760   15224 command_runner.go:130] > [  +0.121183] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I1014 08:47:28.150821   15224 command_runner.go:130] > [  +0.024192] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I1014 08:47:28.150846   15224 command_runner.go:130] > [  +0.000000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I1014 08:47:28.150875   15224 command_runner.go:130] > [  +0.000000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I1014 08:47:28.150875   15224 command_runner.go:130] > [  +0.058588] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I1014 08:47:28.150875   15224 command_runner.go:130] > [  +0.021951] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I1014 08:47:28.150875   15224 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I1014 08:47:28.150875   15224 command_runner.go:130] > [  +5.764502] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I1014 08:47:28.150875   15224 command_runner.go:130] > [  +0.701221] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I1014 08:47:28.150875   15224 command_runner.go:130] > [  +1.823727] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	I1014 08:47:28.150875   15224 command_runner.go:130] > [  +7.351082] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I1014 08:47:28.150875   15224 command_runner.go:130] > [  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I1014 08:47:28.150875   15224 command_runner.go:130] > [  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	I1014 08:47:28.150875   15224 command_runner.go:130] > [Oct14 15:45] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	I1014 08:47:28.150875   15224 command_runner.go:130] > [  +0.175163] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	I1014 08:47:28.150875   15224 command_runner.go:130] > [ +26.061812] systemd-fstab-generator[1013]: Ignoring "noauto" option for root device
	I1014 08:47:28.150875   15224 command_runner.go:130] > [  +0.098944] kauditd_printk_skb: 71 callbacks suppressed
	I1014 08:47:28.150875   15224 command_runner.go:130] > [  +0.531295] systemd-fstab-generator[1053]: Ignoring "noauto" option for root device
	I1014 08:47:28.150875   15224 command_runner.go:130] > [Oct14 15:46] systemd-fstab-generator[1065]: Ignoring "noauto" option for root device
	I1014 08:47:28.150875   15224 command_runner.go:130] > [  +0.229472] systemd-fstab-generator[1079]: Ignoring "noauto" option for root device
	I1014 08:47:28.150875   15224 command_runner.go:130] > [  +2.943333] systemd-fstab-generator[1310]: Ignoring "noauto" option for root device
	I1014 08:47:28.150875   15224 command_runner.go:130] > [  +0.192845] systemd-fstab-generator[1321]: Ignoring "noauto" option for root device
	I1014 08:47:28.150875   15224 command_runner.go:130] > [  +0.209914] systemd-fstab-generator[1333]: Ignoring "noauto" option for root device
	I1014 08:47:28.150875   15224 command_runner.go:130] > [  +0.290916] systemd-fstab-generator[1348]: Ignoring "noauto" option for root device
	I1014 08:47:28.150875   15224 command_runner.go:130] > [  +0.928050] systemd-fstab-generator[1473]: Ignoring "noauto" option for root device
	I1014 08:47:28.150875   15224 command_runner.go:130] > [  +0.103044] kauditd_printk_skb: 202 callbacks suppressed
	I1014 08:47:28.150875   15224 command_runner.go:130] > [  +3.884891] systemd-fstab-generator[1614]: Ignoring "noauto" option for root device
	I1014 08:47:28.150875   15224 command_runner.go:130] > [  +1.232270] kauditd_printk_skb: 44 callbacks suppressed
	I1014 08:47:28.150875   15224 command_runner.go:130] > [  +5.880292] kauditd_printk_skb: 30 callbacks suppressed
	I1014 08:47:28.150875   15224 command_runner.go:130] > [  +4.216972] systemd-fstab-generator[2460]: Ignoring "noauto" option for root device
	I1014 08:47:28.150875   15224 command_runner.go:130] > [ +15.813728] kauditd_printk_skb: 72 callbacks suppressed
	I1014 08:47:28.152624   15224 logs.go:123] Gathering logs for coredns [5d223e2e64fc] ...
	I1014 08:47:28.152624   15224 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d223e2e64fc"
	I1014 08:47:28.182696   15224 command_runner.go:130] > .:53
	I1014 08:47:28.182696   15224 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 6020d6b23d8ab86e45d1d2aab12a43bd19ffd1800c3e6fd0a66779be525e59cd5618fbe141b55ee3a43e920652bc72e7efafa696e55e63b2f614e5fc80dabe10
	I1014 08:47:28.182696   15224 command_runner.go:130] > CoreDNS-1.11.3
	I1014 08:47:28.182844   15224 command_runner.go:130] > linux/amd64, go1.21.11, a6338e9
	I1014 08:47:28.182844   15224 command_runner.go:130] > [INFO] 127.0.0.1:42996 - 9104 "HINFO IN 5434967794797104596.5472118418078127170. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.148386647s
	I1014 08:47:28.183199   15224 logs.go:123] Gathering logs for kindnet [bba035362eb9] ...
	I1014 08:47:28.183199   15224 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bba035362eb9"
	I1014 08:47:28.209334   15224 command_runner.go:130] ! I1014 15:46:18.000845       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1014 08:47:28.210651   15224 command_runner.go:130] ! I1014 15:46:18.015386       1 main.go:139] hostIP = 172.20.106.123
	I1014 08:47:28.211385   15224 command_runner.go:130] ! podIP = 172.20.106.123
	I1014 08:47:28.211385   15224 command_runner.go:130] ! I1014 15:46:18.015613       1 main.go:148] setting mtu 1500 for CNI 
	I1014 08:47:28.211385   15224 command_runner.go:130] ! I1014 15:46:18.015630       1 main.go:178] kindnetd IP family: "ipv4"
	I1014 08:47:28.211385   15224 command_runner.go:130] ! I1014 15:46:18.015641       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I1014 08:47:28.211385   15224 command_runner.go:130] ! I1014 15:46:18.919987       1 main.go:238] Error creating network policy controller: could not run nftables command: /dev/stdin:1:1-37: Error: Could not process rule: Operation not supported
	I1014 08:47:28.211385   15224 command_runner.go:130] ! add table inet kube-network-policies
	I1014 08:47:28.211385   15224 command_runner.go:130] ! ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
	I1014 08:47:28.211385   15224 command_runner.go:130] ! , skipping network policies
	I1014 08:47:28.211385   15224 command_runner.go:130] ! W1014 15:46:48.934772       1 reflector.go:561] pkg/mod/k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I1014 08:47:28.211385   15224 command_runner.go:130] ! E1014 15:46:48.935157       1 reflector.go:158] "Unhandled Error" err="pkg/mod/k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError"
	I1014 08:47:28.211385   15224 command_runner.go:130] ! I1014 15:46:58.925780       1 main.go:296] Handling node with IPs: map[172.20.106.123:{}]
	I1014 08:47:28.211385   15224 command_runner.go:130] ! I1014 15:46:58.926393       1 main.go:300] handling current node
	I1014 08:47:28.211385   15224 command_runner.go:130] ! I1014 15:46:58.927562       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.211385   15224 command_runner.go:130] ! I1014 15:46:58.927665       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.211385   15224 command_runner.go:130] ! I1014 15:46:58.928645       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.20.109.137 Flags: [] Table: 0 Realm: 0} 
	I1014 08:47:28.211385   15224 command_runner.go:130] ! I1014 15:46:58.929412       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:28.211385   15224 command_runner.go:130] ! I1014 15:46:58.929466       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:28.211385   15224 command_runner.go:130] ! I1014 15:46:58.929555       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.20.102.29 Flags: [] Table: 0 Realm: 0} 
	I1014 08:47:28.211385   15224 command_runner.go:130] ! I1014 15:47:08.930440       1 main.go:296] Handling node with IPs: map[172.20.106.123:{}]
	I1014 08:47:28.211385   15224 command_runner.go:130] ! I1014 15:47:08.930586       1 main.go:300] handling current node
	I1014 08:47:28.211385   15224 command_runner.go:130] ! I1014 15:47:08.930648       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.211385   15224 command_runner.go:130] ! I1014 15:47:08.930739       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.211385   15224 command_runner.go:130] ! I1014 15:47:08.931080       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:28.211385   15224 command_runner.go:130] ! I1014 15:47:08.931268       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:28.211385   15224 command_runner.go:130] ! I1014 15:47:18.921538       1 main.go:296] Handling node with IPs: map[172.20.106.123:{}]
	I1014 08:47:28.211385   15224 command_runner.go:130] ! I1014 15:47:18.921639       1 main.go:300] handling current node
	I1014 08:47:28.211385   15224 command_runner.go:130] ! I1014 15:47:18.921689       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:28.211385   15224 command_runner.go:130] ! I1014 15:47:18.921698       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:28.211385   15224 command_runner.go:130] ! I1014 15:47:18.922117       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:28.211385   15224 command_runner.go:130] ! I1014 15:47:18.922190       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:28.214324   15224 logs.go:123] Gathering logs for kubelet ...
	I1014 08:47:28.214324   15224 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 08:47:28.245343   15224 command_runner.go:130] > Oct 14 15:46:05 multinode-671000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I1014 08:47:28.245343   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 kubelet[1480]: I1014 15:46:06.037054    1480 server.go:486] "Kubelet version" kubeletVersion="v1.31.1"
	I1014 08:47:28.245343   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 kubelet[1480]: I1014 15:46:06.037147    1480 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 08:47:28.245343   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 kubelet[1480]: I1014 15:46:06.038385    1480 server.go:929] "Client rotation is on, will bootstrap in background"
	I1014 08:47:28.245343   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 kubelet[1480]: E1014 15:46:06.039788    1480 run.go:72] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I1014 08:47:28.245343   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I1014 08:47:28.245343   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I1014 08:47:28.245343   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I1014 08:47:28.245343   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I1014 08:47:28.245343   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I1014 08:47:28.245343   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 kubelet[1540]: I1014 15:46:06.835721    1540 server.go:486] "Kubelet version" kubeletVersion="v1.31.1"
	I1014 08:47:28.245343   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 kubelet[1540]: I1014 15:46:06.835931    1540 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 08:47:28.245343   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 kubelet[1540]: I1014 15:46:06.836250    1540 server.go:929] "Client rotation is on, will bootstrap in background"
	I1014 08:47:28.245343   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 kubelet[1540]: E1014 15:46:06.836463    1540 run.go:72] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I1014 08:47:28.245343   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I1014 08:47:28.245343   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I1014 08:47:28.245343   15224 command_runner.go:130] > Oct 14 15:46:07 multinode-671000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I1014 08:47:28.245343   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I1014 08:47:28.245343   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.687712    1622 server.go:486] "Kubelet version" kubeletVersion="v1.31.1"
	I1014 08:47:28.245343   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.688474    1622 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 08:47:28.245343   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.689105    1622 server.go:929] "Client rotation is on, will bootstrap in background"
	I1014 08:47:28.246324   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.691939    1622 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I1014 08:47:28.246324   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.718455    1622 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1014 08:47:28.246324   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: E1014 15:46:09.739709    1622 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
	I1014 08:47:28.246324   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.739760    1622 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
	I1014 08:47:28.246324   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.744155    1622 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I1014 08:47:28.246324   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.744395    1622 server.go:812] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I1014 08:47:28.246324   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.744486    1622 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
	I1014 08:47:28.246324   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.744668    1622 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I1014 08:47:28.246324   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.744761    1622 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"multinode-671000","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
	I1014 08:47:28.246324   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.746378    1622 topology_manager.go:138] "Creating topology manager with none policy"
	I1014 08:47:28.246324   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.746460    1622 container_manager_linux.go:300] "Creating device plugin manager"
	I1014 08:47:28.246324   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.746633    1622 state_mem.go:36] "Initialized new in-memory state store"
	I1014 08:47:28.246324   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.749964    1622 kubelet.go:408] "Attempting to sync node with API server"
	I1014 08:47:28.246324   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.750004    1622 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
	I1014 08:47:28.246324   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.750036    1622 kubelet.go:314] "Adding apiserver pod source"
	I1014 08:47:28.246324   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.750844    1622 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I1014 08:47:28.246324   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.756693    1622 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="docker" version="27.3.1" apiVersion="v1"
	I1014 08:47:28.246324   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.763816    1622 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
	I1014 08:47:28.246324   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: W1014 15:46:09.764725    1622 probe.go:272] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I1014 08:47:28.246324   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.766925    1622 server.go:1269] "Started kubelet"
	I1014 08:47:28.246324   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: W1014 15:46:09.767088    1622 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-671000&limit=500&resourceVersion=0": dial tcp 172.20.106.123:8443: connect: connection refused
	I1014 08:47:28.246324   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: E1014 15:46:09.767172    1622 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-671000&limit=500&resourceVersion=0\": dial tcp 172.20.106.123:8443: connect: connection refused" logger="UnhandledError"
	I1014 08:47:28.246324   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: W1014 15:46:09.769189    1622 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.20.106.123:8443: connect: connection refused
	I1014 08:47:28.246324   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: E1014 15:46:09.769350    1622 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.20.106.123:8443: connect: connection refused" logger="UnhandledError"
	I1014 08:47:28.246324   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.769454    1622 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I1014 08:47:28.246324   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.770134    1622 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I1014 08:47:28.246324   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: E1014 15:46:09.772237    1622 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.20.106.123:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-671000.17fe5c47a6bff791  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-671000,UID:multinode-671000,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-671000,},FirstTimestamp:2024-10-14 15:46:09.766881169 +0000 UTC m=+0.158153075,LastTimestamp:2024-10-14 15:46:09.766881169 +0000 UTC m=+0.158153075,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-671000,}"
	I1014 08:47:28.246324   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.773096    1622 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
	I1014 08:47:28.246324   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.774576    1622 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I1014 08:47:28.246324   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.777686    1622 server.go:460] "Adding debug handlers to kubelet server"
	I1014 08:47:28.246324   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.780950    1622 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
	I1014 08:47:28.246324   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.788697    1622 volume_manager.go:289] "Starting Kubelet Volume Manager"
	I1014 08:47:28.246324   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: E1014 15:46:09.789003    1622 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"multinode-671000\" not found"
	I1014 08:47:28.246324   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.789640    1622 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
	I1014 08:47:28.246324   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.800447    1622 factory.go:221] Registration of the systemd container factory successfully
	I1014 08:47:28.246324   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.800536    1622 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I1014 08:47:28.246324   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.800587    1622 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
	I1014 08:47:28.246324   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: W1014 15:46:09.811192    1622 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.20.106.123:8443: connect: connection refused
	I1014 08:47:28.247322   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: E1014 15:46:09.811498    1622 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.20.106.123:8443: connect: connection refused" logger="UnhandledError"
	I1014 08:47:28.247322   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: E1014 15:46:09.812017    1622 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-671000?timeout=10s\": dial tcp 172.20.106.123:8443: connect: connection refused" interval="200ms"
	I1014 08:47:28.247322   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.863497    1622 cpu_manager.go:214] "Starting CPU manager" policy="none"
	I1014 08:47:28.247322   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.863530    1622 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
	I1014 08:47:28.247322   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.863554    1622 state_mem.go:36] "Initialized new in-memory state store"
	I1014 08:47:28.247322   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.868881    1622 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I1014 08:47:28.247322   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.868953    1622 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I1014 08:47:28.247322   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.868995    1622 policy_none.go:49] "None policy: Start"
	I1014 08:47:28.247322   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.872200    1622 reconciler.go:26] "Reconciler: start to sync state"
	I1014 08:47:28.247322   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.877834    1622 memory_manager.go:170] "Starting memorymanager" policy="None"
	I1014 08:47:28.247322   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.877929    1622 state_mem.go:35] "Initializing new in-memory state store"
	I1014 08:47:28.247322   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.878704    1622 state_mem.go:75] "Updated machine memory state"
	I1014 08:47:28.247322   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.884555    1622 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I1014 08:47:28.247322   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.885687    1622 eviction_manager.go:189] "Eviction manager: starting control loop"
	I1014 08:47:28.247322   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.885828    1622 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I1014 08:47:28.247322   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: E1014 15:46:09.889524    1622 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-671000\" not found"
	I1014 08:47:28.247322   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.892062    1622 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I1014 08:47:28.247322   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.900012    1622 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I1014 08:47:28.247322   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.905094    1622 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I1014 08:47:28.247322   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.906277    1622 status_manager.go:217] "Starting to sync pod status with apiserver"
	I1014 08:47:28.247322   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.906885    1622 kubelet.go:2321] "Starting kubelet main sync loop"
	I1014 08:47:28.247322   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: E1014 15:46:09.907458    1622 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
	I1014 08:47:28.247322   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: W1014 15:46:09.914061    1622 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.20.106.123:8443: connect: connection refused
	I1014 08:47:28.247322   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: E1014 15:46:09.914371    1622 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.20.106.123:8443: connect: connection refused" logger="UnhandledError"
	I1014 08:47:28.247322   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: E1014 15:46:09.933056    1622 iptables.go:577] "Could not set up iptables canary" err=<
	I1014 08:47:28.247322   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I1014 08:47:28.247322   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I1014 08:47:28.247322   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I1014 08:47:28.247322   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I1014 08:47:28.247322   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.987581    1622 kubelet_node_status.go:72] "Attempting to register node" node="multinode-671000"
	I1014 08:47:28.247322   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: E1014 15:46:09.988812    1622 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.20.106.123:8443: connect: connection refused" node="multinode-671000"
	I1014 08:47:28.247322   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.008458    1622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f8cc9a218fef4446839b0253827fc2dd89f8eccbd66ab7f0e5123c033f0aacc"
	I1014 08:47:28.247322   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: E1014 15:46:10.013887    1622 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-671000?timeout=10s\": dial tcp 172.20.106.123:8443: connect: connection refused" interval="400ms"
	I1014 08:47:28.247322   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.014354    1622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5733d27d2f1c328dbd19f6392a86e426f344b6f17c65211404fa797e84b69c9"
	I1014 08:47:28.247322   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.014436    1622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06e529266db4b2cf3ad09285cddeb89d7d11e858b5156cf73f41198c9500b9f2"
	I1014 08:47:28.247322   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.014506    1622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e48ddcfdf90ad3bfbe621f27c97a331f448947ca77dbd98ab3c9daef2c84e22"
	I1014 08:47:28.247322   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.020161    1622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2dc78387553ff4b78626f5e6aa103a40ec97f42ef49363e27d7d3698cd0df26f"
	I1014 08:47:28.247322   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.035902    1622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c6be2bd1889b6c5f021362c07c3a88f7f0ff266bb9e8ba4106d666b0f1d267d"
	I1014 08:47:28.247322   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.049024    1622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1863de70f2316e54fa61ef7c5c6aba94808669b81b1cc811dce745011ee807cb"
	I1014 08:47:28.247322   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.065264    1622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7144d8ce208cf8c176ad1fc9980a72d450a3d558c4f8f9ee453dea6b22358085"
	I1014 08:47:28.247322   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.079145    1622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bfdde08319e32b93d740933d5ab50829de8f9f3edacce92efe155b4ada4f4212"
	I1014 08:47:28.247322   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.179820    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/864b69f35cb25e9dd5d87a753a055a10-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-671000\" (UID: \"864b69f35cb25e9dd5d87a753a055a10\") " pod="kube-system/kube-apiserver-multinode-671000"
	I1014 08:47:28.247322   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.179915    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b83e31864e0ae98d29d960866012ecb0-k8s-certs\") pod \"kube-controller-manager-multinode-671000\" (UID: \"b83e31864e0ae98d29d960866012ecb0\") " pod="kube-system/kube-controller-manager-multinode-671000"
	I1014 08:47:28.248321   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.179945    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b83e31864e0ae98d29d960866012ecb0-kubeconfig\") pod \"kube-controller-manager-multinode-671000\" (UID: \"b83e31864e0ae98d29d960866012ecb0\") " pod="kube-system/kube-controller-manager-multinode-671000"
	I1014 08:47:28.248321   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.179963    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3e987cfaedc75c39145e8fc131c60c81-kubeconfig\") pod \"kube-scheduler-multinode-671000\" (UID: \"3e987cfaedc75c39145e8fc131c60c81\") " pod="kube-system/kube-scheduler-multinode-671000"
	I1014 08:47:28.248321   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.179984    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/679486a8f27de5805bc2e87fb1920dce-etcd-certs\") pod \"etcd-multinode-671000\" (UID: \"679486a8f27de5805bc2e87fb1920dce\") " pod="kube-system/etcd-multinode-671000"
	I1014 08:47:28.248321   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.180012    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/679486a8f27de5805bc2e87fb1920dce-etcd-data\") pod \"etcd-multinode-671000\" (UID: \"679486a8f27de5805bc2e87fb1920dce\") " pod="kube-system/etcd-multinode-671000"
	I1014 08:47:28.248321   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.180036    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/864b69f35cb25e9dd5d87a753a055a10-ca-certs\") pod \"kube-apiserver-multinode-671000\" (UID: \"864b69f35cb25e9dd5d87a753a055a10\") " pod="kube-system/kube-apiserver-multinode-671000"
	I1014 08:47:28.248321   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.180050    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/864b69f35cb25e9dd5d87a753a055a10-k8s-certs\") pod \"kube-apiserver-multinode-671000\" (UID: \"864b69f35cb25e9dd5d87a753a055a10\") " pod="kube-system/kube-apiserver-multinode-671000"
	I1014 08:47:28.248321   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.180068    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b83e31864e0ae98d29d960866012ecb0-ca-certs\") pod \"kube-controller-manager-multinode-671000\" (UID: \"b83e31864e0ae98d29d960866012ecb0\") " pod="kube-system/kube-controller-manager-multinode-671000"
	I1014 08:47:28.248321   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.180089    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b83e31864e0ae98d29d960866012ecb0-flexvolume-dir\") pod \"kube-controller-manager-multinode-671000\" (UID: \"b83e31864e0ae98d29d960866012ecb0\") " pod="kube-system/kube-controller-manager-multinode-671000"
	I1014 08:47:28.248321   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.180113    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b83e31864e0ae98d29d960866012ecb0-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-671000\" (UID: \"b83e31864e0ae98d29d960866012ecb0\") " pod="kube-system/kube-controller-manager-multinode-671000"
	I1014 08:47:28.248321   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.191857    1622 kubelet_node_status.go:72] "Attempting to register node" node="multinode-671000"
	I1014 08:47:28.248321   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: E1014 15:46:10.193195    1622 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.20.106.123:8443: connect: connection refused" node="multinode-671000"
	I1014 08:47:28.248321   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: E1014 15:46:10.421148    1622 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-671000?timeout=10s\": dial tcp 172.20.106.123:8443: connect: connection refused" interval="800ms"
	I1014 08:47:28.248321   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.595286    1622 kubelet_node_status.go:72] "Attempting to register node" node="multinode-671000"
	I1014 08:47:28.248321   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: E1014 15:46:10.596178    1622 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.20.106.123:8443: connect: connection refused" node="multinode-671000"
	I1014 08:47:28.248321   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: W1014 15:46:10.601172    1622 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.20.106.123:8443: connect: connection refused
	I1014 08:47:28.248321   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: E1014 15:46:10.601259    1622 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.20.106.123:8443: connect: connection refused" logger="UnhandledError"
	I1014 08:47:28.248321   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: W1014 15:46:10.913794    1622 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.20.106.123:8443: connect: connection refused
	I1014 08:47:28.248321   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: E1014 15:46:10.913870    1622 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.20.106.123:8443: connect: connection refused" logger="UnhandledError"
	I1014 08:47:28.248321   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 kubelet[1622]: W1014 15:46:11.078571    1622 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.20.106.123:8443: connect: connection refused
	I1014 08:47:28.248321   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 kubelet[1622]: E1014 15:46:11.078638    1622 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.20.106.123:8443: connect: connection refused" logger="UnhandledError"
	I1014 08:47:28.248321   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 kubelet[1622]: W1014 15:46:11.151154    1622 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-671000&limit=500&resourceVersion=0": dial tcp 172.20.106.123:8443: connect: connection refused
	I1014 08:47:28.248321   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 kubelet[1622]: E1014 15:46:11.151247    1622 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-671000&limit=500&resourceVersion=0\": dial tcp 172.20.106.123:8443: connect: connection refused" logger="UnhandledError"
	I1014 08:47:28.248321   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 kubelet[1622]: E1014 15:46:11.223425    1622 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-671000?timeout=10s\": dial tcp 172.20.106.123:8443: connect: connection refused" interval="1.6s"
	I1014 08:47:28.248321   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 kubelet[1622]: I1014 15:46:11.306759    1622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0697a11790e806fa4d679e3d97be4c7193692d1cb8f76d882cf3a75aa8e0c238"
	I1014 08:47:28.248321   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 kubelet[1622]: I1014 15:46:11.397496    1622 kubelet_node_status.go:72] "Attempting to register node" node="multinode-671000"
	I1014 08:47:28.249350   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 kubelet[1622]: E1014 15:46:11.399409    1622 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.20.106.123:8443: connect: connection refused" node="multinode-671000"
	I1014 08:47:28.249350   15224 command_runner.go:130] > Oct 14 15:46:13 multinode-671000 kubelet[1622]: I1014 15:46:13.001489    1622 kubelet_node_status.go:72] "Attempting to register node" node="multinode-671000"
	I1014 08:47:28.249350   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.316022    1622 kubelet_node_status.go:111] "Node was previously registered" node="multinode-671000"
	I1014 08:47:28.249350   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.316194    1622 kubelet_node_status.go:75] "Successfully registered node" node="multinode-671000"
	I1014 08:47:28.249350   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.316226    1622 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I1014 08:47:28.249350   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.317405    1622 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I1014 08:47:28.249350   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.318741    1622 setters.go:600] "Node became not ready" node="multinode-671000" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-10-14T15:46:15Z","lastTransitionTime":"2024-10-14T15:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I1014 08:47:28.249350   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: E1014 15:46:15.671751    1622 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"etcd-multinode-671000\" already exists" pod="kube-system/etcd-multinode-671000"
	I1014 08:47:28.249350   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.765668    1622 apiserver.go:52] "Watching apiserver"
	I1014 08:47:28.249350   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: E1014 15:46:15.771464    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:28.249350   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: E1014 15:46:15.772813    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:28.249350   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.774456    1622 kubelet.go:1895] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-671000" podUID="80ea37b8-9db1-4a39-9e9e-51c01edadfb1"
	I1014 08:47:28.249350   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.790436    1622 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	I1014 08:47:28.249350   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.804744    1622 kubelet.go:1900] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-multinode-671000"
	I1014 08:47:28.249350   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.875635    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f8d14473-8859-4015-84e9-d00656cc00c9-xtables-lock\") pod \"kube-proxy-r74dx\" (UID: \"f8d14473-8859-4015-84e9-d00656cc00c9\") " pod="kube-system/kube-proxy-r74dx"
	I1014 08:47:28.249350   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.875831    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a508bbf9-7565-4c73-98cb-e9684985c298-xtables-lock\") pod \"kindnet-wqrx6\" (UID: \"a508bbf9-7565-4c73-98cb-e9684985c298\") " pod="kube-system/kindnet-wqrx6"
	I1014 08:47:28.249350   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.876217    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f8d14473-8859-4015-84e9-d00656cc00c9-lib-modules\") pod \"kube-proxy-r74dx\" (UID: \"f8d14473-8859-4015-84e9-d00656cc00c9\") " pod="kube-system/kube-proxy-r74dx"
	I1014 08:47:28.249350   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.876424    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/fde8ff75-bc7f-4db4-b098-c3a08b38d205-tmp\") pod \"storage-provisioner\" (UID: \"fde8ff75-bc7f-4db4-b098-c3a08b38d205\") " pod="kube-system/storage-provisioner"
	I1014 08:47:28.249350   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.876537    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/a508bbf9-7565-4c73-98cb-e9684985c298-cni-cfg\") pod \"kindnet-wqrx6\" (UID: \"a508bbf9-7565-4c73-98cb-e9684985c298\") " pod="kube-system/kindnet-wqrx6"
	I1014 08:47:28.249350   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.876562    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a508bbf9-7565-4c73-98cb-e9684985c298-lib-modules\") pod \"kindnet-wqrx6\" (UID: \"a508bbf9-7565-4c73-98cb-e9684985c298\") " pod="kube-system/kindnet-wqrx6"
	I1014 08:47:28.249350   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: E1014 15:46:15.877886    1622 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I1014 08:47:28.249350   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: E1014 15:46:15.877952    1622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fd736862-9e3e-4a3d-9a86-08efd2338477-config-volume podName:fd736862-9e3e-4a3d-9a86-08efd2338477 nodeName:}" failed. No retries permitted until 2024-10-14 15:46:16.377930736 +0000 UTC m=+6.769202642 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/fd736862-9e3e-4a3d-9a86-08efd2338477-config-volume") pod "coredns-7c65d6cfc9-fs9ct" (UID: "fd736862-9e3e-4a3d-9a86-08efd2338477") : object "kube-system"/"coredns" not registered
	I1014 08:47:28.249350   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.896550    1622 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	I1014 08:47:28.249350   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: E1014 15:46:15.904462    1622 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:28.249350   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: E1014 15:46:15.904557    1622 projected.go:194] Error preparing data for projected volume kube-api-access-46k9l for pod default/busybox-7dff88458-vlp7j: object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:28.249350   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: E1014 15:46:15.904737    1622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/99068807-9f92-42f1-a1a0-fb6e533dc61a-kube-api-access-46k9l podName:99068807-9f92-42f1-a1a0-fb6e533dc61a nodeName:}" failed. No retries permitted until 2024-10-14 15:46:16.404658149 +0000 UTC m=+6.795930055 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-46k9l" (UniqueName: "kubernetes.io/projected/99068807-9f92-42f1-a1a0-fb6e533dc61a-kube-api-access-46k9l") pod "busybox-7dff88458-vlp7j" (UID: "99068807-9f92-42f1-a1a0-fb6e533dc61a") : object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:28.249350   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.919872    1622 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cf38ccc62eb74f6e658e1f66ae8cab1" path="/var/lib/kubelet/pods/3cf38ccc62eb74f6e658e1f66ae8cab1/volumes"
	I1014 08:47:28.249350   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.921055    1622 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-671000" podStartSLOduration=0.921039556 podStartE2EDuration="921.039556ms" podCreationTimestamp="2024-10-14 15:46:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-14 15:46:15.920643156 +0000 UTC m=+6.311915162" watchObservedRunningTime="2024-10-14 15:46:15.921039556 +0000 UTC m=+6.312311562"
	I1014 08:47:28.250319   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.921516    1622 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="778fdb620bffec66f911bf24e3c8210b" path="/var/lib/kubelet/pods/778fdb620bffec66f911bf24e3c8210b/volumes"
	I1014 08:47:28.250319   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 kubelet[1622]: E1014 15:46:16.380142    1622 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I1014 08:47:28.250319   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 kubelet[1622]: E1014 15:46:16.380233    1622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fd736862-9e3e-4a3d-9a86-08efd2338477-config-volume podName:fd736862-9e3e-4a3d-9a86-08efd2338477 nodeName:}" failed. No retries permitted until 2024-10-14 15:46:17.380214172 +0000 UTC m=+7.771486078 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/fd736862-9e3e-4a3d-9a86-08efd2338477-config-volume") pod "coredns-7c65d6cfc9-fs9ct" (UID: "fd736862-9e3e-4a3d-9a86-08efd2338477") : object "kube-system"/"coredns" not registered
	I1014 08:47:28.250319   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 kubelet[1622]: E1014 15:46:16.480798    1622 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:28.250319   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 kubelet[1622]: E1014 15:46:16.480831    1622 projected.go:194] Error preparing data for projected volume kube-api-access-46k9l for pod default/busybox-7dff88458-vlp7j: object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:28.250319   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 kubelet[1622]: E1014 15:46:16.480915    1622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/99068807-9f92-42f1-a1a0-fb6e533dc61a-kube-api-access-46k9l podName:99068807-9f92-42f1-a1a0-fb6e533dc61a nodeName:}" failed. No retries permitted until 2024-10-14 15:46:17.480897019 +0000 UTC m=+7.872168925 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-46k9l" (UniqueName: "kubernetes.io/projected/99068807-9f92-42f1-a1a0-fb6e533dc61a-kube-api-access-46k9l") pod "busybox-7dff88458-vlp7j" (UID: "99068807-9f92-42f1-a1a0-fb6e533dc61a") : object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:28.250319   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 kubelet[1622]: I1014 15:46:16.655226    1622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f8bdf552734e59df94d489a081585cdaeeaee729f0a0ae92105117d4d744f3f"
	I1014 08:47:28.250319   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 kubelet[1622]: I1014 15:46:16.670380    1622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cdcdd532ba1369fe0e866b321f18e0bd99a88a2699c325eea17a070d48f2f024"
	I1014 08:47:28.250319   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 kubelet[1622]: I1014 15:46:16.981444    1622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7bcadf1f0885ff3411b477a64c6af366c77eb264ce7a761f174b079b11e2e703"
	I1014 08:47:28.250319   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 kubelet[1622]: I1014 15:46:16.981500    1622 kubelet.go:1895] "Trying to delete pod" pod="kube-system/etcd-multinode-671000" podUID="56dfdf16-1224-41e3-94de-9d7f4021a17d"
	I1014 08:47:28.250319   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 kubelet[1622]: E1014 15:46:16.982831    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:28.250319   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 kubelet[1622]: I1014 15:46:17.011276    1622 kubelet.go:1900] "Deleted mirror pod because it is outdated" pod="kube-system/etcd-multinode-671000"
	I1014 08:47:28.250319   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 kubelet[1622]: E1014 15:46:17.388224    1622 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I1014 08:47:28.250319   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 kubelet[1622]: E1014 15:46:17.388370    1622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fd736862-9e3e-4a3d-9a86-08efd2338477-config-volume podName:fd736862-9e3e-4a3d-9a86-08efd2338477 nodeName:}" failed. No retries permitted until 2024-10-14 15:46:19.388351245 +0000 UTC m=+9.779623151 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/fd736862-9e3e-4a3d-9a86-08efd2338477-config-volume") pod "coredns-7c65d6cfc9-fs9ct" (UID: "fd736862-9e3e-4a3d-9a86-08efd2338477") : object "kube-system"/"coredns" not registered
	I1014 08:47:28.250319   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 kubelet[1622]: E1014 15:46:17.489591    1622 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:28.250319   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 kubelet[1622]: E1014 15:46:17.489649    1622 projected.go:194] Error preparing data for projected volume kube-api-access-46k9l for pod default/busybox-7dff88458-vlp7j: object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:28.250319   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 kubelet[1622]: E1014 15:46:17.489828    1622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/99068807-9f92-42f1-a1a0-fb6e533dc61a-kube-api-access-46k9l podName:99068807-9f92-42f1-a1a0-fb6e533dc61a nodeName:}" failed. No retries permitted until 2024-10-14 15:46:19.489808492 +0000 UTC m=+9.881080398 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-46k9l" (UniqueName: "kubernetes.io/projected/99068807-9f92-42f1-a1a0-fb6e533dc61a-kube-api-access-46k9l") pod "busybox-7dff88458-vlp7j" (UID: "99068807-9f92-42f1-a1a0-fb6e533dc61a") : object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:28.250319   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 kubelet[1622]: E1014 15:46:17.915482    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:28.250319   15224 command_runner.go:130] > Oct 14 15:46:18 multinode-671000 kubelet[1622]: I1014 15:46:18.163696    1622 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-multinode-671000" podStartSLOduration=1.163677409 podStartE2EDuration="1.163677409s" podCreationTimestamp="2024-10-14 15:46:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-14 15:46:18.133766095 +0000 UTC m=+8.525038101" watchObservedRunningTime="2024-10-14 15:46:18.163677409 +0000 UTC m=+8.554949415"
	I1014 08:47:28.250319   15224 command_runner.go:130] > Oct 14 15:46:18 multinode-671000 kubelet[1622]: E1014 15:46:18.908674    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:28.250319   15224 command_runner.go:130] > Oct 14 15:46:19 multinode-671000 kubelet[1622]: E1014 15:46:19.405477    1622 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I1014 08:47:28.250319   15224 command_runner.go:130] > Oct 14 15:46:19 multinode-671000 kubelet[1622]: E1014 15:46:19.405614    1622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fd736862-9e3e-4a3d-9a86-08efd2338477-config-volume podName:fd736862-9e3e-4a3d-9a86-08efd2338477 nodeName:}" failed. No retries permitted until 2024-10-14 15:46:23.405594191 +0000 UTC m=+13.796866097 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/fd736862-9e3e-4a3d-9a86-08efd2338477-config-volume") pod "coredns-7c65d6cfc9-fs9ct" (UID: "fd736862-9e3e-4a3d-9a86-08efd2338477") : object "kube-system"/"coredns" not registered
	I1014 08:47:28.250319   15224 command_runner.go:130] > Oct 14 15:46:19 multinode-671000 kubelet[1622]: E1014 15:46:19.506858    1622 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:28.250319   15224 command_runner.go:130] > Oct 14 15:46:19 multinode-671000 kubelet[1622]: E1014 15:46:19.507035    1622 projected.go:194] Error preparing data for projected volume kube-api-access-46k9l for pod default/busybox-7dff88458-vlp7j: object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:28.250319   15224 command_runner.go:130] > Oct 14 15:46:19 multinode-671000 kubelet[1622]: E1014 15:46:19.507122    1622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/99068807-9f92-42f1-a1a0-fb6e533dc61a-kube-api-access-46k9l podName:99068807-9f92-42f1-a1a0-fb6e533dc61a nodeName:}" failed. No retries permitted until 2024-10-14 15:46:23.507105839 +0000 UTC m=+13.898377845 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-46k9l" (UniqueName: "kubernetes.io/projected/99068807-9f92-42f1-a1a0-fb6e533dc61a-kube-api-access-46k9l") pod "busybox-7dff88458-vlp7j" (UID: "99068807-9f92-42f1-a1a0-fb6e533dc61a") : object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:28.250319   15224 command_runner.go:130] > Oct 14 15:46:19 multinode-671000 kubelet[1622]: E1014 15:46:19.931507    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:28.250319   15224 command_runner.go:130] > Oct 14 15:46:20 multinode-671000 kubelet[1622]: E1014 15:46:20.907760    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:28.250319   15224 command_runner.go:130] > Oct 14 15:46:21 multinode-671000 kubelet[1622]: E1014 15:46:21.908994    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:28.251322   15224 command_runner.go:130] > Oct 14 15:46:22 multinode-671000 kubelet[1622]: E1014 15:46:22.908657    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:28.251322   15224 command_runner.go:130] > Oct 14 15:46:23 multinode-671000 kubelet[1622]: E1014 15:46:23.462111    1622 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I1014 08:47:28.251322   15224 command_runner.go:130] > Oct 14 15:46:23 multinode-671000 kubelet[1622]: E1014 15:46:23.462203    1622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fd736862-9e3e-4a3d-9a86-08efd2338477-config-volume podName:fd736862-9e3e-4a3d-9a86-08efd2338477 nodeName:}" failed. No retries permitted until 2024-10-14 15:46:31.462185592 +0000 UTC m=+21.853457598 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/fd736862-9e3e-4a3d-9a86-08efd2338477-config-volume") pod "coredns-7c65d6cfc9-fs9ct" (UID: "fd736862-9e3e-4a3d-9a86-08efd2338477") : object "kube-system"/"coredns" not registered
	I1014 08:47:28.251322   15224 command_runner.go:130] > Oct 14 15:46:23 multinode-671000 kubelet[1622]: E1014 15:46:23.562508    1622 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:28.251322   15224 command_runner.go:130] > Oct 14 15:46:23 multinode-671000 kubelet[1622]: E1014 15:46:23.562563    1622 projected.go:194] Error preparing data for projected volume kube-api-access-46k9l for pod default/busybox-7dff88458-vlp7j: object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:28.251322   15224 command_runner.go:130] > Oct 14 15:46:23 multinode-671000 kubelet[1622]: E1014 15:46:23.562768    1622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/99068807-9f92-42f1-a1a0-fb6e533dc61a-kube-api-access-46k9l podName:99068807-9f92-42f1-a1a0-fb6e533dc61a nodeName:}" failed. No retries permitted until 2024-10-14 15:46:31.562650785 +0000 UTC m=+21.953922691 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-46k9l" (UniqueName: "kubernetes.io/projected/99068807-9f92-42f1-a1a0-fb6e533dc61a-kube-api-access-46k9l") pod "busybox-7dff88458-vlp7j" (UID: "99068807-9f92-42f1-a1a0-fb6e533dc61a") : object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:28.251322   15224 command_runner.go:130] > Oct 14 15:46:23 multinode-671000 kubelet[1622]: E1014 15:46:23.910119    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:28.251322   15224 command_runner.go:130] > Oct 14 15:46:24 multinode-671000 kubelet[1622]: E1014 15:46:24.908917    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:28.251322   15224 command_runner.go:130] > Oct 14 15:46:25 multinode-671000 kubelet[1622]: E1014 15:46:25.909505    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:28.251322   15224 command_runner.go:130] > Oct 14 15:46:26 multinode-671000 kubelet[1622]: E1014 15:46:26.907750    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:28.251322   15224 command_runner.go:130] > Oct 14 15:46:27 multinode-671000 kubelet[1622]: E1014 15:46:27.908822    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:28.251322   15224 command_runner.go:130] > Oct 14 15:46:28 multinode-671000 kubelet[1622]: E1014 15:46:28.908219    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:28.251322   15224 command_runner.go:130] > Oct 14 15:46:29 multinode-671000 kubelet[1622]: E1014 15:46:29.910218    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:28.251322   15224 command_runner.go:130] > Oct 14 15:46:30 multinode-671000 kubelet[1622]: E1014 15:46:30.908259    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:28.251322   15224 command_runner.go:130] > Oct 14 15:46:31 multinode-671000 kubelet[1622]: E1014 15:46:31.541520    1622 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I1014 08:47:28.251322   15224 command_runner.go:130] > Oct 14 15:46:31 multinode-671000 kubelet[1622]: E1014 15:46:31.541653    1622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fd736862-9e3e-4a3d-9a86-08efd2338477-config-volume podName:fd736862-9e3e-4a3d-9a86-08efd2338477 nodeName:}" failed. No retries permitted until 2024-10-14 15:46:47.541634578 +0000 UTC m=+37.932906484 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/fd736862-9e3e-4a3d-9a86-08efd2338477-config-volume") pod "coredns-7c65d6cfc9-fs9ct" (UID: "fd736862-9e3e-4a3d-9a86-08efd2338477") : object "kube-system"/"coredns" not registered
	I1014 08:47:28.251322   15224 command_runner.go:130] > Oct 14 15:46:31 multinode-671000 kubelet[1622]: E1014 15:46:31.641930    1622 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:28.251322   15224 command_runner.go:130] > Oct 14 15:46:31 multinode-671000 kubelet[1622]: E1014 15:46:31.641961    1622 projected.go:194] Error preparing data for projected volume kube-api-access-46k9l for pod default/busybox-7dff88458-vlp7j: object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:28.251322   15224 command_runner.go:130] > Oct 14 15:46:31 multinode-671000 kubelet[1622]: E1014 15:46:31.642009    1622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/99068807-9f92-42f1-a1a0-fb6e533dc61a-kube-api-access-46k9l podName:99068807-9f92-42f1-a1a0-fb6e533dc61a nodeName:}" failed. No retries permitted until 2024-10-14 15:46:47.641990935 +0000 UTC m=+38.033262841 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-46k9l" (UniqueName: "kubernetes.io/projected/99068807-9f92-42f1-a1a0-fb6e533dc61a-kube-api-access-46k9l") pod "busybox-7dff88458-vlp7j" (UID: "99068807-9f92-42f1-a1a0-fb6e533dc61a") : object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:28.251322   15224 command_runner.go:130] > Oct 14 15:46:31 multinode-671000 kubelet[1622]: E1014 15:46:31.908383    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:28.251322   15224 command_runner.go:130] > Oct 14 15:46:32 multinode-671000 kubelet[1622]: E1014 15:46:32.908527    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:28.251322   15224 command_runner.go:130] > Oct 14 15:46:33 multinode-671000 kubelet[1622]: E1014 15:46:33.910838    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:28.251322   15224 command_runner.go:130] > Oct 14 15:46:34 multinode-671000 kubelet[1622]: E1014 15:46:34.908180    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:28.251322   15224 command_runner.go:130] > Oct 14 15:46:35 multinode-671000 kubelet[1622]: E1014 15:46:35.908574    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:28.251322   15224 command_runner.go:130] > Oct 14 15:46:36 multinode-671000 kubelet[1622]: E1014 15:46:36.907722    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:28.251322   15224 command_runner.go:130] > Oct 14 15:46:37 multinode-671000 kubelet[1622]: E1014 15:46:37.907861    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:28.251322   15224 command_runner.go:130] > Oct 14 15:46:38 multinode-671000 kubelet[1622]: E1014 15:46:38.908728    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:28.251322   15224 command_runner.go:130] > Oct 14 15:46:39 multinode-671000 kubelet[1622]: E1014 15:46:39.908994    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:28.252321   15224 command_runner.go:130] > Oct 14 15:46:40 multinode-671000 kubelet[1622]: E1014 15:46:40.908676    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:28.252321   15224 command_runner.go:130] > Oct 14 15:46:41 multinode-671000 kubelet[1622]: E1014 15:46:41.909525    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:28.252321   15224 command_runner.go:130] > Oct 14 15:46:42 multinode-671000 kubelet[1622]: E1014 15:46:42.908679    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:28.252321   15224 command_runner.go:130] > Oct 14 15:46:43 multinode-671000 kubelet[1622]: E1014 15:46:43.908615    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:28.252321   15224 command_runner.go:130] > Oct 14 15:46:44 multinode-671000 kubelet[1622]: E1014 15:46:44.908884    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:28.252321   15224 command_runner.go:130] > Oct 14 15:46:45 multinode-671000 kubelet[1622]: E1014 15:46:45.908370    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:28.252321   15224 command_runner.go:130] > Oct 14 15:46:46 multinode-671000 kubelet[1622]: E1014 15:46:46.909263    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:28.252321   15224 command_runner.go:130] > Oct 14 15:46:47 multinode-671000 kubelet[1622]: E1014 15:46:47.573240    1622 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I1014 08:47:28.252321   15224 command_runner.go:130] > Oct 14 15:46:47 multinode-671000 kubelet[1622]: E1014 15:46:47.573353    1622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fd736862-9e3e-4a3d-9a86-08efd2338477-config-volume podName:fd736862-9e3e-4a3d-9a86-08efd2338477 nodeName:}" failed. No retries permitted until 2024-10-14 15:47:19.573334644 +0000 UTC m=+69.964606650 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/fd736862-9e3e-4a3d-9a86-08efd2338477-config-volume") pod "coredns-7c65d6cfc9-fs9ct" (UID: "fd736862-9e3e-4a3d-9a86-08efd2338477") : object "kube-system"/"coredns" not registered
	I1014 08:47:28.252321   15224 command_runner.go:130] > Oct 14 15:46:47 multinode-671000 kubelet[1622]: E1014 15:46:47.673810    1622 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:28.252321   15224 command_runner.go:130] > Oct 14 15:46:47 multinode-671000 kubelet[1622]: E1014 15:46:47.673907    1622 projected.go:194] Error preparing data for projected volume kube-api-access-46k9l for pod default/busybox-7dff88458-vlp7j: object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:28.252321   15224 command_runner.go:130] > Oct 14 15:46:47 multinode-671000 kubelet[1622]: E1014 15:46:47.674014    1622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/99068807-9f92-42f1-a1a0-fb6e533dc61a-kube-api-access-46k9l podName:99068807-9f92-42f1-a1a0-fb6e533dc61a nodeName:}" failed. No retries permitted until 2024-10-14 15:47:19.673994259 +0000 UTC m=+70.065266165 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-46k9l" (UniqueName: "kubernetes.io/projected/99068807-9f92-42f1-a1a0-fb6e533dc61a-kube-api-access-46k9l") pod "busybox-7dff88458-vlp7j" (UID: "99068807-9f92-42f1-a1a0-fb6e533dc61a") : object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:28.252321   15224 command_runner.go:130] > Oct 14 15:46:47 multinode-671000 kubelet[1622]: E1014 15:46:47.908883    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:28.252321   15224 command_runner.go:130] > Oct 14 15:46:48 multinode-671000 kubelet[1622]: I1014 15:46:48.486803    1622 scope.go:117] "RemoveContainer" containerID="3d8b7bae48a59c755a1ffda14e7fdd0c2302b394db67b7de21fd5b819dad243b"
	I1014 08:47:28.252321   15224 command_runner.go:130] > Oct 14 15:46:48 multinode-671000 kubelet[1622]: I1014 15:46:48.487259    1622 scope.go:117] "RemoveContainer" containerID="c76c2585681071ddc096fbc042a644b58d0b7d5d604107c6906a076c7aa1cca6"
	I1014 08:47:28.252321   15224 command_runner.go:130] > Oct 14 15:46:48 multinode-671000 kubelet[1622]: E1014 15:46:48.487448    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(fde8ff75-bc7f-4db4-b098-c3a08b38d205)\"" pod="kube-system/storage-provisioner" podUID="fde8ff75-bc7f-4db4-b098-c3a08b38d205"
	I1014 08:47:28.253338   15224 command_runner.go:130] > Oct 14 15:46:48 multinode-671000 kubelet[1622]: E1014 15:46:48.908732    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:28.253338   15224 command_runner.go:130] > Oct 14 15:46:49 multinode-671000 kubelet[1622]: E1014 15:46:49.908877    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:28.253338   15224 command_runner.go:130] > Oct 14 15:46:50 multinode-671000 kubelet[1622]: E1014 15:46:50.907718    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:28.253338   15224 command_runner.go:130] > Oct 14 15:46:51 multinode-671000 kubelet[1622]: E1014 15:46:51.909552    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:28.253338   15224 command_runner.go:130] > Oct 14 15:46:52 multinode-671000 kubelet[1622]: E1014 15:46:52.908818    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:28.253338   15224 command_runner.go:130] > Oct 14 15:46:53 multinode-671000 kubelet[1622]: E1014 15:46:53.908389    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:28.253338   15224 command_runner.go:130] > Oct 14 15:46:54 multinode-671000 kubelet[1622]: E1014 15:46:54.908089    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:28.253338   15224 command_runner.go:130] > Oct 14 15:46:55 multinode-671000 kubelet[1622]: E1014 15:46:55.908582    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:28.253338   15224 command_runner.go:130] > Oct 14 15:46:56 multinode-671000 kubelet[1622]: E1014 15:46:56.908839    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:28.253338   15224 command_runner.go:130] > Oct 14 15:46:57 multinode-671000 kubelet[1622]: E1014 15:46:57.909489    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:28.253338   15224 command_runner.go:130] > Oct 14 15:46:58 multinode-671000 kubelet[1622]: E1014 15:46:58.908804    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:28.253338   15224 command_runner.go:130] > Oct 14 15:46:59 multinode-671000 kubelet[1622]: I1014 15:46:59.853068    1622 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
	I1014 08:47:28.253338   15224 command_runner.go:130] > Oct 14 15:47:02 multinode-671000 kubelet[1622]: I1014 15:47:02.908981    1622 scope.go:117] "RemoveContainer" containerID="c76c2585681071ddc096fbc042a644b58d0b7d5d604107c6906a076c7aa1cca6"
	I1014 08:47:28.253338   15224 command_runner.go:130] > Oct 14 15:47:09 multinode-671000 kubelet[1622]: I1014 15:47:09.901385    1622 scope.go:117] "RemoveContainer" containerID="0b5a6e440d7b67606ed0a4dfa4d07715b1fd7e6f53bc0b8779f86a33c5baf6e9"
	I1014 08:47:28.253338   15224 command_runner.go:130] > Oct 14 15:47:09 multinode-671000 kubelet[1622]: I1014 15:47:09.946936    1622 scope.go:117] "RemoveContainer" containerID="1ba3cd8bbd5963097f4d674fc98eca21e1a710f5a150a067747aa4e6c922d2fe"
	I1014 08:47:28.253338   15224 command_runner.go:130] > Oct 14 15:47:09 multinode-671000 kubelet[1622]: E1014 15:47:09.949713    1622 iptables.go:577] "Could not set up iptables canary" err=<
	I1014 08:47:28.253338   15224 command_runner.go:130] > Oct 14 15:47:09 multinode-671000 kubelet[1622]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I1014 08:47:28.253338   15224 command_runner.go:130] > Oct 14 15:47:09 multinode-671000 kubelet[1622]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I1014 08:47:28.253338   15224 command_runner.go:130] > Oct 14 15:47:09 multinode-671000 kubelet[1622]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I1014 08:47:28.253338   15224 command_runner.go:130] > Oct 14 15:47:09 multinode-671000 kubelet[1622]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I1014 08:47:28.297321   15224 logs.go:123] Gathering logs for kube-apiserver [a834664fc8b8] ...
	I1014 08:47:28.297321   15224 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a834664fc8b8"
	I1014 08:47:28.323819   15224 command_runner.go:130] ! I1014 15:46:12.133612       1 options.go:228] external host was not specified, using 172.20.106.123
	I1014 08:47:28.324079   15224 command_runner.go:130] ! I1014 15:46:12.139596       1 server.go:142] Version: v1.31.1
	I1014 08:47:28.324079   15224 command_runner.go:130] ! I1014 15:46:12.140322       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 08:47:28.324079   15224 command_runner.go:130] ! I1014 15:46:13.070213       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I1014 08:47:28.324079   15224 command_runner.go:130] ! I1014 15:46:13.112422       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1014 08:47:28.324079   15224 command_runner.go:130] ! I1014 15:46:13.116622       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I1014 08:47:28.324079   15224 command_runner.go:130] ! I1014 15:46:13.116890       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1014 08:47:28.324079   15224 command_runner.go:130] ! I1014 15:46:13.117611       1 instance.go:232] Using reconciler: lease
	I1014 08:47:28.324079   15224 command_runner.go:130] ! I1014 15:46:13.606403       1 handler.go:286] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I1014 08:47:28.324079   15224 command_runner.go:130] ! W1014 15:46:13.606961       1 genericapiserver.go:765] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:28.324079   15224 command_runner.go:130] ! I1014 15:46:13.910757       1 handler.go:286] Adding GroupVersion  v1 to ResourceManager
	I1014 08:47:28.324079   15224 command_runner.go:130] ! I1014 15:46:13.911096       1 apis.go:105] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I1014 08:47:28.324079   15224 command_runner.go:130] ! I1014 15:46:14.140196       1 apis.go:105] API group "storagemigration.k8s.io" is not enabled, skipping.
	I1014 08:47:28.324079   15224 command_runner.go:130] ! I1014 15:46:14.332586       1 apis.go:105] API group "resource.k8s.io" is not enabled, skipping.
	I1014 08:47:28.324079   15224 command_runner.go:130] ! I1014 15:46:14.344695       1 handler.go:286] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I1014 08:47:28.324079   15224 command_runner.go:130] ! W1014 15:46:14.344792       1 genericapiserver.go:765] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:28.324079   15224 command_runner.go:130] ! W1014 15:46:14.344802       1 genericapiserver.go:765] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I1014 08:47:28.324079   15224 command_runner.go:130] ! I1014 15:46:14.345547       1 handler.go:286] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I1014 08:47:28.324079   15224 command_runner.go:130] ! W1014 15:46:14.345645       1 genericapiserver.go:765] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:28.324079   15224 command_runner.go:130] ! I1014 15:46:14.346729       1 handler.go:286] Adding GroupVersion autoscaling v2 to ResourceManager
	I1014 08:47:28.324079   15224 command_runner.go:130] ! I1014 15:46:14.348142       1 handler.go:286] Adding GroupVersion autoscaling v1 to ResourceManager
	I1014 08:47:28.324079   15224 command_runner.go:130] ! W1014 15:46:14.348261       1 genericapiserver.go:765] Skipping API autoscaling/v2beta1 because it has no resources.
	I1014 08:47:28.324626   15224 command_runner.go:130] ! W1014 15:46:14.348272       1 genericapiserver.go:765] Skipping API autoscaling/v2beta2 because it has no resources.
	I1014 08:47:28.324626   15224 command_runner.go:130] ! I1014 15:46:14.350632       1 handler.go:286] Adding GroupVersion batch v1 to ResourceManager
	I1014 08:47:28.324626   15224 command_runner.go:130] ! W1014 15:46:14.350741       1 genericapiserver.go:765] Skipping API batch/v1beta1 because it has no resources.
	I1014 08:47:28.324626   15224 command_runner.go:130] ! I1014 15:46:14.352378       1 handler.go:286] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I1014 08:47:28.324707   15224 command_runner.go:130] ! W1014 15:46:14.352489       1 genericapiserver.go:765] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:28.324707   15224 command_runner.go:130] ! W1014 15:46:14.352501       1 genericapiserver.go:765] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I1014 08:47:28.324707   15224 command_runner.go:130] ! I1014 15:46:14.353674       1 handler.go:286] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I1014 08:47:28.324707   15224 command_runner.go:130] ! W1014 15:46:14.353813       1 genericapiserver.go:765] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:28.324707   15224 command_runner.go:130] ! W1014 15:46:14.353843       1 genericapiserver.go:765] Skipping API coordination.k8s.io/v1alpha1 because it has no resources.
	I1014 08:47:28.324777   15224 command_runner.go:130] ! I1014 15:46:14.355117       1 handler.go:286] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I1014 08:47:28.324816   15224 command_runner.go:130] ! W1014 15:46:14.355256       1 genericapiserver.go:765] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:28.324816   15224 command_runner.go:130] ! I1014 15:46:14.358401       1 handler.go:286] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I1014 08:47:28.324857   15224 command_runner.go:130] ! W1014 15:46:14.358517       1 genericapiserver.go:765] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:28.324857   15224 command_runner.go:130] ! W1014 15:46:14.358528       1 genericapiserver.go:765] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I1014 08:47:28.324857   15224 command_runner.go:130] ! I1014 15:46:14.359534       1 handler.go:286] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I1014 08:47:28.324911   15224 command_runner.go:130] ! W1014 15:46:14.359632       1 genericapiserver.go:765] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:28.324911   15224 command_runner.go:130] ! W1014 15:46:14.359643       1 genericapiserver.go:765] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I1014 08:47:28.324947   15224 command_runner.go:130] ! I1014 15:46:14.360836       1 handler.go:286] Adding GroupVersion policy v1 to ResourceManager
	I1014 08:47:28.324972   15224 command_runner.go:130] ! W1014 15:46:14.360942       1 genericapiserver.go:765] Skipping API policy/v1beta1 because it has no resources.
	I1014 08:47:28.324972   15224 command_runner.go:130] ! I1014 15:46:14.363702       1 handler.go:286] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I1014 08:47:28.325035   15224 command_runner.go:130] ! W1014 15:46:14.363848       1 genericapiserver.go:765] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:28.325035   15224 command_runner.go:130] ! W1014 15:46:14.363860       1 genericapiserver.go:765] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I1014 08:47:28.325082   15224 command_runner.go:130] ! I1014 15:46:14.364685       1 handler.go:286] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I1014 08:47:28.325082   15224 command_runner.go:130] ! W1014 15:46:14.364801       1 genericapiserver.go:765] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:28.325082   15224 command_runner.go:130] ! W1014 15:46:14.364812       1 genericapiserver.go:765] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I1014 08:47:28.325082   15224 command_runner.go:130] ! I1014 15:46:14.368101       1 handler.go:286] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I1014 08:47:28.325082   15224 command_runner.go:130] ! W1014 15:46:14.368216       1 genericapiserver.go:765] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:28.325082   15224 command_runner.go:130] ! W1014 15:46:14.368228       1 genericapiserver.go:765] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I1014 08:47:28.325082   15224 command_runner.go:130] ! I1014 15:46:14.370008       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1 to ResourceManager
	I1014 08:47:28.325082   15224 command_runner.go:130] ! I1014 15:46:14.371702       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
	I1014 08:47:28.325082   15224 command_runner.go:130] ! W1014 15:46:14.371808       1 genericapiserver.go:765] Skipping API flowcontrol.apiserver.k8s.io/v1beta2 because it has no resources.
	I1014 08:47:28.325082   15224 command_runner.go:130] ! W1014 15:46:14.371818       1 genericapiserver.go:765] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:28.325082   15224 command_runner.go:130] ! I1014 15:46:14.376771       1 handler.go:286] Adding GroupVersion apps v1 to ResourceManager
	I1014 08:47:28.325082   15224 command_runner.go:130] ! W1014 15:46:14.376868       1 genericapiserver.go:765] Skipping API apps/v1beta2 because it has no resources.
	I1014 08:47:28.325082   15224 command_runner.go:130] ! W1014 15:46:14.376877       1 genericapiserver.go:765] Skipping API apps/v1beta1 because it has no resources.
	I1014 08:47:28.325082   15224 command_runner.go:130] ! I1014 15:46:14.379998       1 handler.go:286] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I1014 08:47:28.325082   15224 command_runner.go:130] ! W1014 15:46:14.380101       1 genericapiserver.go:765] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:28.325082   15224 command_runner.go:130] ! W1014 15:46:14.380112       1 genericapiserver.go:765] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I1014 08:47:28.325082   15224 command_runner.go:130] ! I1014 15:46:14.380956       1 handler.go:286] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I1014 08:47:28.325082   15224 command_runner.go:130] ! W1014 15:46:14.381059       1 genericapiserver.go:765] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:28.325082   15224 command_runner.go:130] ! I1014 15:46:14.395072       1 handler.go:286] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I1014 08:47:28.325082   15224 command_runner.go:130] ! W1014 15:46:14.395116       1 genericapiserver.go:765] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:28.325082   15224 command_runner.go:130] ! I1014 15:46:15.014537       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1014 08:47:28.325082   15224 command_runner.go:130] ! I1014 15:46:15.014702       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1014 08:47:28.325082   15224 command_runner.go:130] ! I1014 15:46:15.016123       1 dynamic_serving_content.go:135] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I1014 08:47:28.325082   15224 command_runner.go:130] ! I1014 15:46:15.016823       1 secure_serving.go:213] Serving securely on [::]:8443
	I1014 08:47:28.325082   15224 command_runner.go:130] ! I1014 15:46:15.017426       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1014 08:47:28.325082   15224 command_runner.go:130] ! I1014 15:46:15.018450       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I1014 08:47:28.325082   15224 command_runner.go:130] ! I1014 15:46:15.018766       1 remote_available_controller.go:411] Starting RemoteAvailability controller
	I1014 08:47:28.325082   15224 command_runner.go:130] ! I1014 15:46:15.018850       1 cache.go:32] Waiting for caches to sync for RemoteAvailability controller
	I1014 08:47:28.325082   15224 command_runner.go:130] ! I1014 15:46:15.018985       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I1014 08:47:28.325082   15224 command_runner.go:130] ! I1014 15:46:15.021391       1 controller.go:119] Starting legacy_token_tracking_controller
	I1014 08:47:28.325620   15224 command_runner.go:130] ! I1014 15:46:15.021471       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I1014 08:47:28.325620   15224 command_runner.go:130] ! I1014 15:46:15.021517       1 aggregator.go:169] waiting for initial CRD sync...
	I1014 08:47:28.325684   15224 command_runner.go:130] ! I1014 15:46:15.022050       1 dynamic_serving_content.go:135] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I1014 08:47:28.325684   15224 command_runner.go:130] ! I1014 15:46:15.022573       1 cluster_authentication_trust_controller.go:443] Starting cluster_authentication_trust_controller controller
	I1014 08:47:28.325756   15224 command_runner.go:130] ! I1014 15:46:15.022688       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
	I1014 08:47:28.325756   15224 command_runner.go:130] ! I1014 15:46:15.022775       1 system_namespaces_controller.go:66] Starting system namespaces controller
	I1014 08:47:28.325787   15224 command_runner.go:130] ! I1014 15:46:15.026778       1 apf_controller.go:377] Starting API Priority and Fairness config controller
	I1014 08:47:28.325787   15224 command_runner.go:130] ! I1014 15:46:15.027043       1 controller.go:78] Starting OpenAPI AggregationController
	I1014 08:47:28.325787   15224 command_runner.go:130] ! I1014 15:46:15.027942       1 customresource_discovery_controller.go:292] Starting DiscoveryController
	I1014 08:47:28.325787   15224 command_runner.go:130] ! I1014 15:46:15.029402       1 local_available_controller.go:156] Starting LocalAvailability controller
	I1014 08:47:28.325787   15224 command_runner.go:130] ! I1014 15:46:15.029447       1 cache.go:32] Waiting for caches to sync for LocalAvailability controller
	I1014 08:47:28.325787   15224 command_runner.go:130] ! I1014 15:46:15.029815       1 apiservice_controller.go:100] Starting APIServiceRegistrationController
	I1014 08:47:28.325787   15224 command_runner.go:130] ! I1014 15:46:15.029850       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I1014 08:47:28.325862   15224 command_runner.go:130] ! I1014 15:46:15.034040       1 crdregistration_controller.go:114] Starting crd-autoregister controller
	I1014 08:47:28.325862   15224 command_runner.go:130] ! I1014 15:46:15.034136       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I1014 08:47:28.325862   15224 command_runner.go:130] ! I1014 15:46:15.034690       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1014 08:47:28.325862   15224 command_runner.go:130] ! I1014 15:46:15.034946       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1014 08:47:28.325862   15224 command_runner.go:130] ! I1014 15:46:15.082229       1 controller.go:142] Starting OpenAPI controller
	I1014 08:47:28.325940   15224 command_runner.go:130] ! I1014 15:46:15.083838       1 controller.go:90] Starting OpenAPI V3 controller
	I1014 08:47:28.325940   15224 command_runner.go:130] ! I1014 15:46:15.083894       1 naming_controller.go:294] Starting NamingConditionController
	I1014 08:47:28.325940   15224 command_runner.go:130] ! I1014 15:46:15.086443       1 establishing_controller.go:81] Starting EstablishingController
	I1014 08:47:28.325940   15224 command_runner.go:130] ! I1014 15:46:15.087455       1 nonstructuralschema_controller.go:195] Starting NonStructuralSchemaConditionController
	I1014 08:47:28.325940   15224 command_runner.go:130] ! I1014 15:46:15.088333       1 apiapproval_controller.go:189] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I1014 08:47:28.325940   15224 command_runner.go:130] ! I1014 15:46:15.092677       1 crd_finalizer.go:269] Starting CRDFinalizer
	I1014 08:47:28.325940   15224 command_runner.go:130] ! I1014 15:46:15.212597       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1014 08:47:28.325940   15224 command_runner.go:130] ! I1014 15:46:15.212691       1 policy_source.go:224] refreshing policies
	I1014 08:47:28.326042   15224 command_runner.go:130] ! I1014 15:46:15.221529       1 shared_informer.go:320] Caches are synced for configmaps
	I1014 08:47:28.326042   15224 command_runner.go:130] ! I1014 15:46:15.226910       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1014 08:47:28.326042   15224 command_runner.go:130] ! I1014 15:46:15.227013       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1014 08:47:28.326042   15224 command_runner.go:130] ! I1014 15:46:15.229937       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1014 08:47:28.326042   15224 command_runner.go:130] ! I1014 15:46:15.231898       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1014 08:47:28.326122   15224 command_runner.go:130] ! I1014 15:46:15.233234       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1014 08:47:28.326122   15224 command_runner.go:130] ! I1014 15:46:15.234375       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1014 08:47:28.326122   15224 command_runner.go:130] ! I1014 15:46:15.235151       1 aggregator.go:171] initial CRD sync complete...
	I1014 08:47:28.326228   15224 command_runner.go:130] ! I1014 15:46:15.235400       1 autoregister_controller.go:144] Starting autoregister controller
	I1014 08:47:28.326256   15224 command_runner.go:130] ! I1014 15:46:15.235712       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1014 08:47:28.326288   15224 command_runner.go:130] ! I1014 15:46:15.235936       1 cache.go:39] Caches are synced for autoregister controller
	I1014 08:47:28.326288   15224 command_runner.go:130] ! I1014 15:46:15.255261       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1014 08:47:28.326288   15224 command_runner.go:130] ! I1014 15:46:15.256039       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I1014 08:47:28.326288   15224 command_runner.go:130] ! I1014 15:46:15.271561       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1014 08:47:28.326288   15224 command_runner.go:130] ! I1014 15:46:15.319091       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1014 08:47:28.326288   15224 command_runner.go:130] ! I1014 15:46:16.036564       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1014 08:47:28.326288   15224 command_runner.go:130] ! W1014 15:46:16.558489       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.20.100.167 172.20.106.123]
	I1014 08:47:28.326288   15224 command_runner.go:130] ! I1014 15:46:16.560272       1 controller.go:615] quota admission added evaluator for: endpoints
	I1014 08:47:28.326288   15224 command_runner.go:130] ! I1014 15:46:16.573015       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1014 08:47:28.326288   15224 command_runner.go:130] ! I1014 15:46:18.229365       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1014 08:47:28.326288   15224 command_runner.go:130] ! I1014 15:46:18.748102       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1014 08:47:28.326288   15224 command_runner.go:130] ! I1014 15:46:18.793266       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1014 08:47:28.326288   15224 command_runner.go:130] ! I1014 15:46:18.985788       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1014 08:47:28.326288   15224 command_runner.go:130] ! I1014 15:46:19.024530       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1014 08:47:28.326288   15224 command_runner.go:130] ! W1014 15:46:36.563040       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.20.106.123]
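
The apiserver section above ends with the "kubernetes" master service endpoints being reset from [172.20.100.167 172.20.106.123] down to [172.20.106.123], i.e. the stale control-plane address was dropped once the restarted VM settled on a single IP. A minimal way to inspect that endpoint set by hand, assuming kubectl is pointed at this cluster's kubeconfig (a sketch, not part of the test run):

    # show which addresses currently back the "kubernetes" service the apiserver was resetting
    kubectl get endpoints kubernetes -n default
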
	I1014 08:47:28.336942   15224 logs.go:123] Gathering logs for kube-scheduler [661e75bbf6b4] ...
	I1014 08:47:28.336942   15224 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 661e75bbf6b4"
	I1014 08:47:28.369939   15224 command_runner.go:130] ! I1014 15:22:34.688194       1 serving.go:386] Generated self-signed cert in-memory
	I1014 08:47:28.370673   15224 command_runner.go:130] ! W1014 15:22:36.199586       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I1014 08:47:28.370788   15224 command_runner.go:130] ! W1014 15:22:36.199661       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I1014 08:47:28.370788   15224 command_runner.go:130] ! W1014 15:22:36.199675       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	I1014 08:47:28.370788   15224 command_runner.go:130] ! W1014 15:22:36.199681       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1014 08:47:28.371646   15224 command_runner.go:130] ! I1014 15:22:36.288536       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I1014 08:47:28.371646   15224 command_runner.go:130] ! I1014 15:22:36.288649       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 08:47:28.371767   15224 command_runner.go:130] ! I1014 15:22:36.292628       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1014 08:47:28.371767   15224 command_runner.go:130] ! I1014 15:22:36.292942       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1014 08:47:28.371767   15224 command_runner.go:130] ! I1014 15:22:36.293038       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1014 08:47:28.371767   15224 command_runner.go:130] ! I1014 15:22:36.293102       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1014 08:47:28.371845   15224 command_runner.go:130] ! W1014 15:22:36.298034       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I1014 08:47:28.371896   15224 command_runner.go:130] ! E1014 15:22:36.298090       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:28.371953   15224 command_runner.go:130] ! W1014 15:22:36.298377       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I1014 08:47:28.372052   15224 command_runner.go:130] ! E1014 15:22:36.298420       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:28.372152   15224 command_runner.go:130] ! W1014 15:22:36.298587       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I1014 08:47:28.372181   15224 command_runner.go:130] ! E1014 15:22:36.298642       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:28.372181   15224 command_runner.go:130] ! W1014 15:22:36.298730       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I1014 08:47:28.372181   15224 command_runner.go:130] ! E1014 15:22:36.298855       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:28.372181   15224 command_runner.go:130] ! W1014 15:22:36.299272       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I1014 08:47:28.372181   15224 command_runner.go:130] ! E1014 15:22:36.299314       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:28.372181   15224 command_runner.go:130] ! W1014 15:22:36.299416       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I1014 08:47:28.372181   15224 command_runner.go:130] ! E1014 15:22:36.299618       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:28.372181   15224 command_runner.go:130] ! W1014 15:22:36.299693       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I1014 08:47:28.372181   15224 command_runner.go:130] ! E1014 15:22:36.299710       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:28.372181   15224 command_runner.go:130] ! W1014 15:22:36.299857       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I1014 08:47:28.372181   15224 command_runner.go:130] ! E1014 15:22:36.299920       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:28.372181   15224 command_runner.go:130] ! W1014 15:22:36.302822       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1014 08:47:28.372181   15224 command_runner.go:130] ! E1014 15:22:36.303096       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1014 08:47:28.372181   15224 command_runner.go:130] ! W1014 15:22:36.303242       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I1014 08:47:28.372181   15224 command_runner.go:130] ! E1014 15:22:36.303288       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:28.372181   15224 command_runner.go:130] ! W1014 15:22:36.303391       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I1014 08:47:28.372940   15224 command_runner.go:130] ! E1014 15:22:36.303426       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:28.372940   15224 command_runner.go:130] ! W1014 15:22:36.303605       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I1014 08:47:28.372940   15224 command_runner.go:130] ! E1014 15:22:36.303643       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:28.372940   15224 command_runner.go:130] ! W1014 15:22:36.303739       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I1014 08:47:28.372940   15224 command_runner.go:130] ! E1014 15:22:36.303771       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:28.372940   15224 command_runner.go:130] ! W1014 15:22:36.303825       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I1014 08:47:28.372940   15224 command_runner.go:130] ! E1014 15:22:36.303860       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:28.372940   15224 command_runner.go:130] ! W1014 15:22:36.304041       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I1014 08:47:28.372940   15224 command_runner.go:130] ! E1014 15:22:36.304079       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:28.372940   15224 command_runner.go:130] ! W1014 15:22:37.145637       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I1014 08:47:28.372940   15224 command_runner.go:130] ! E1014 15:22:37.146051       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:28.372940   15224 command_runner.go:130] ! W1014 15:22:37.146415       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I1014 08:47:28.372940   15224 command_runner.go:130] ! E1014 15:22:37.146705       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:28.372940   15224 command_runner.go:130] ! W1014 15:22:37.189116       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I1014 08:47:28.373579   15224 command_runner.go:130] ! E1014 15:22:37.189252       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:28.373579   15224 command_runner.go:130] ! W1014 15:22:37.205810       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I1014 08:47:28.373579   15224 command_runner.go:130] ! E1014 15:22:37.206152       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:28.373579   15224 command_runner.go:130] ! W1014 15:22:37.269786       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I1014 08:47:28.373579   15224 command_runner.go:130] ! E1014 15:22:37.269856       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:28.373579   15224 command_runner.go:130] ! W1014 15:22:37.270258       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I1014 08:47:28.373579   15224 command_runner.go:130] ! E1014 15:22:37.270286       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:28.373579   15224 command_runner.go:130] ! W1014 15:22:37.290857       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I1014 08:47:28.373579   15224 command_runner.go:130] ! E1014 15:22:37.290997       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:28.373579   15224 command_runner.go:130] ! W1014 15:22:37.304519       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1014 08:47:28.373579   15224 command_runner.go:130] ! E1014 15:22:37.305020       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1014 08:47:28.373579   15224 command_runner.go:130] ! W1014 15:22:37.328746       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I1014 08:47:28.373579   15224 command_runner.go:130] ! E1014 15:22:37.329049       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:28.374141   15224 command_runner.go:130] ! W1014 15:22:37.338059       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I1014 08:47:28.374224   15224 command_runner.go:130] ! E1014 15:22:37.338269       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:28.374321   15224 command_runner.go:130] ! W1014 15:22:37.394728       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I1014 08:47:28.374349   15224 command_runner.go:130] ! E1014 15:22:37.394803       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:28.374382   15224 command_runner.go:130] ! W1014 15:22:37.445455       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I1014 08:47:28.374382   15224 command_runner.go:130] ! E1014 15:22:37.445612       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:28.374382   15224 command_runner.go:130] ! W1014 15:22:37.490285       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I1014 08:47:28.374382   15224 command_runner.go:130] ! E1014 15:22:37.490817       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:28.374382   15224 command_runner.go:130] ! W1014 15:22:37.537304       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I1014 08:47:28.374382   15224 command_runner.go:130] ! E1014 15:22:37.537396       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:28.374382   15224 command_runner.go:130] ! W1014 15:22:37.713713       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I1014 08:47:28.374382   15224 command_runner.go:130] ! E1014 15:22:37.713777       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:28.374382   15224 command_runner.go:130] ! I1014 15:22:39.593596       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1014 08:47:28.374382   15224 command_runner.go:130] ! I1014 15:43:46.388691       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I1014 08:47:28.374382   15224 command_runner.go:130] ! I1014 15:43:46.388783       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1014 08:47:28.374382   15224 command_runner.go:130] ! I1014 15:43:46.389141       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1014 08:47:28.374382   15224 command_runner.go:130] ! E1014 15:43:46.389549       1 run.go:72] "command failed" err="finished without leader elect"
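
The scheduler log shows the expected startup RBAC churn: every informer list/watch fails with "forbidden" for system:kube-scheduler until the client-ca cache syncs at 15:22:39, and the process finally exits at 15:43:46 with "finished without leader elect" when its apiserver went away during the restart. The section was captured with the docker logs command shown just above; the same container log can be re-fetched by hand (a sketch, assuming the multinode-671000 profile implied by the node name):

    # re-read the last 400 lines of the exited kube-scheduler container from inside the VM
    minikube ssh -p multinode-671000 -- docker logs --tail 400 661e75bbf6b4
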
	I1014 08:47:28.389055   15224 logs.go:123] Gathering logs for container status ...
	I1014 08:47:28.389055   15224 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 08:47:28.450064   15224 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I1014 08:47:28.450064   15224 command_runner.go:130] > 1adddc667bd90       8c811b4aec35f                                                                                         8 seconds ago        Running             busybox                   1                   9092d17516eb3       busybox-7dff88458-vlp7j
	I1014 08:47:28.450064   15224 command_runner.go:130] > 5d223e2e64fcd       c69fa2e9cbf5f                                                                                         8 seconds ago        Running             coredns                   1                   429b989a1a986       coredns-7c65d6cfc9-fs9ct
	I1014 08:47:28.450064   15224 command_runner.go:130] > 9d526b02ee41c       6e38f40d628db                                                                                         26 seconds ago       Running             storage-provisioner       2                   cdcdd532ba136       storage-provisioner
	I1014 08:47:28.450064   15224 command_runner.go:130] > bba035362eb97       3a5bc24055c9e                                                                                         About a minute ago   Running             kindnet-cni               1                   7bcadf1f0885f       kindnet-wqrx6
	I1014 08:47:28.450064   15224 command_runner.go:130] > c76c258568107       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   cdcdd532ba136       storage-provisioner
	I1014 08:47:28.450064   15224 command_runner.go:130] > e83db276dec37       60c005f310ff3                                                                                         About a minute ago   Running             kube-proxy                1                   6f8bdf552734e       kube-proxy-r74dx
	I1014 08:47:28.450064   15224 command_runner.go:130] > 48c8492e231e1       2e96e5913fc06                                                                                         About a minute ago   Running             etcd                      0                   0697a11790e80       etcd-multinode-671000
	I1014 08:47:28.450064   15224 command_runner.go:130] > 8af48c446f7e1       175ffd71cce3d                                                                                         About a minute ago   Running             kube-controller-manager   1                   7bd4c36606eef       kube-controller-manager-multinode-671000
	I1014 08:47:28.450064   15224 command_runner.go:130] > a834664fc8b80       6bab7719df100                                                                                         About a minute ago   Running             kube-apiserver            0                   6155e8be2d5d7       kube-apiserver-multinode-671000
	I1014 08:47:28.450064   15224 command_runner.go:130] > d428685276e1e       9aa1fad941575                                                                                         About a minute ago   Running             kube-scheduler            1                   1d3033f871fb1       kube-scheduler-multinode-671000
	I1014 08:47:28.450064   15224 command_runner.go:130] > cbf0b40e378b9       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   20 minutes ago       Exited              busybox                   0                   06e529266db4b       busybox-7dff88458-vlp7j
	I1014 08:47:28.450064   15224 command_runner.go:130] > d9831e9f8ce8c       c69fa2e9cbf5f                                                                                         24 minutes ago       Exited              coredns                   0                   2f8cc9a218fef       coredns-7c65d6cfc9-fs9ct
	I1014 08:47:28.450064   15224 command_runner.go:130] > fcdf89a3ac8ce       kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387              24 minutes ago       Exited              kindnet-cni               0                   5e48ddcfdf90a       kindnet-wqrx6
	I1014 08:47:28.450064   15224 command_runner.go:130] > ea19428d70363       60c005f310ff3                                                                                         24 minutes ago       Exited              kube-proxy                0                   7144d8ce208cf       kube-proxy-r74dx
	I1014 08:47:28.450064   15224 command_runner.go:130] > 661e75bbf6b46       9aa1fad941575                                                                                         24 minutes ago       Exited              kube-scheduler            0                   2dc78387553ff       kube-scheduler-multinode-671000
	I1014 08:47:28.450064   15224 command_runner.go:130] > 712aad669c9f6       175ffd71cce3d                                                                                         24 minutes ago       Exited              kube-controller-manager   0                   bfdde08319e32       kube-controller-manager-multinode-671000
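
The listing above comes from the fallback probe at the top of this section: use crictl when it is on PATH, otherwise fall back to docker ps. Note the two generations it exposes: the Exited containers from the original ~24-minute-old boot and their Running replacements created seconds to about a minute ago, which matches the restart under test. The same probe can be run interactively (a sketch, from a shell opened with minikube ssh -p multinode-671000):

    # list every container, preferring crictl over the docker CLI when available
    sudo crictl ps -a || sudo docker ps -a
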
	I1014 08:47:28.453051   15224 logs.go:123] Gathering logs for describe nodes ...
	I1014 08:47:28.453051   15224 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 08:47:28.601782   15224 command_runner.go:130] > Name:               multinode-671000
	I1014 08:47:28.601867   15224 command_runner.go:130] > Roles:              control-plane
	I1014 08:47:28.601867   15224 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I1014 08:47:28.601867   15224 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I1014 08:47:28.601867   15224 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I1014 08:47:28.601969   15224 command_runner.go:130] >                     kubernetes.io/hostname=multinode-671000
	I1014 08:47:28.601969   15224 command_runner.go:130] >                     kubernetes.io/os=linux
	I1014 08:47:28.601969   15224 command_runner.go:130] >                     minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b
	I1014 08:47:28.601969   15224 command_runner.go:130] >                     minikube.k8s.io/name=multinode-671000
	I1014 08:47:28.601969   15224 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I1014 08:47:28.601969   15224 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_10_14T08_22_41_0700
	I1014 08:47:28.601969   15224 command_runner.go:130] >                     minikube.k8s.io/version=v1.34.0
	I1014 08:47:28.602042   15224 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I1014 08:47:28.602042   15224 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I1014 08:47:28.602042   15224 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I1014 08:47:28.602042   15224 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I1014 08:47:28.602042   15224 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I1014 08:47:28.602108   15224 command_runner.go:130] > CreationTimestamp:  Mon, 14 Oct 2024 15:22:36 +0000
	I1014 08:47:28.602137   15224 command_runner.go:130] > Taints:             <none>
	I1014 08:47:28.602137   15224 command_runner.go:130] > Unschedulable:      false
	I1014 08:47:28.602137   15224 command_runner.go:130] > Lease:
	I1014 08:47:28.602171   15224 command_runner.go:130] >   HolderIdentity:  multinode-671000
	I1014 08:47:28.602188   15224 command_runner.go:130] >   AcquireTime:     <unset>
	I1014 08:47:28.602219   15224 command_runner.go:130] >   RenewTime:       Mon, 14 Oct 2024 15:47:26 +0000
	I1014 08:47:28.602219   15224 command_runner.go:130] > Conditions:
	I1014 08:47:28.602267   15224 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I1014 08:47:28.602313   15224 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I1014 08:47:28.602313   15224 command_runner.go:130] >   MemoryPressure   False   Mon, 14 Oct 2024 15:46:59 +0000   Mon, 14 Oct 2024 15:22:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I1014 08:47:28.602357   15224 command_runner.go:130] >   DiskPressure     False   Mon, 14 Oct 2024 15:46:59 +0000   Mon, 14 Oct 2024 15:22:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I1014 08:47:28.602357   15224 command_runner.go:130] >   PIDPressure      False   Mon, 14 Oct 2024 15:46:59 +0000   Mon, 14 Oct 2024 15:22:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I1014 08:47:28.602433   15224 command_runner.go:130] >   Ready            True    Mon, 14 Oct 2024 15:46:59 +0000   Mon, 14 Oct 2024 15:46:59 +0000   KubeletReady                 kubelet is posting ready status
	I1014 08:47:28.602433   15224 command_runner.go:130] > Addresses:
	I1014 08:47:28.602433   15224 command_runner.go:130] >   InternalIP:  172.20.106.123
	I1014 08:47:28.602433   15224 command_runner.go:130] >   Hostname:    multinode-671000
	I1014 08:47:28.602433   15224 command_runner.go:130] > Capacity:
	I1014 08:47:28.602433   15224 command_runner.go:130] >   cpu:                2
	I1014 08:47:28.602433   15224 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I1014 08:47:28.602433   15224 command_runner.go:130] >   hugepages-2Mi:      0
	I1014 08:47:28.602433   15224 command_runner.go:130] >   memory:             2164264Ki
	I1014 08:47:28.602433   15224 command_runner.go:130] >   pods:               110
	I1014 08:47:28.602433   15224 command_runner.go:130] > Allocatable:
	I1014 08:47:28.602433   15224 command_runner.go:130] >   cpu:                2
	I1014 08:47:28.602433   15224 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I1014 08:47:28.602433   15224 command_runner.go:130] >   hugepages-2Mi:      0
	I1014 08:47:28.602433   15224 command_runner.go:130] >   memory:             2164264Ki
	I1014 08:47:28.602433   15224 command_runner.go:130] >   pods:               110
	I1014 08:47:28.602433   15224 command_runner.go:130] > System Info:
	I1014 08:47:28.602433   15224 command_runner.go:130] >   Machine ID:                 fc389f3b9e2846b4b909cfc8e7984541
	I1014 08:47:28.602433   15224 command_runner.go:130] >   System UUID:                f72bc210-2ad8-1e4f-ad7d-3e75159c3d98
	I1014 08:47:28.602433   15224 command_runner.go:130] >   Boot ID:                    98d09a99-1eff-402d-837f-6cacdc4463d7
	I1014 08:47:28.602433   15224 command_runner.go:130] >   Kernel Version:             5.10.207
	I1014 08:47:28.602433   15224 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I1014 08:47:28.602433   15224 command_runner.go:130] >   Operating System:           linux
	I1014 08:47:28.602433   15224 command_runner.go:130] >   Architecture:               amd64
	I1014 08:47:28.602433   15224 command_runner.go:130] >   Container Runtime Version:  docker://27.3.1
	I1014 08:47:28.602433   15224 command_runner.go:130] >   Kubelet Version:            v1.31.1
	I1014 08:47:28.602433   15224 command_runner.go:130] >   Kube-Proxy Version:         v1.31.1
	I1014 08:47:28.602433   15224 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I1014 08:47:28.602433   15224 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I1014 08:47:28.602433   15224 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I1014 08:47:28.602433   15224 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I1014 08:47:28.602433   15224 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I1014 08:47:28.602433   15224 command_runner.go:130] >   default                     busybox-7dff88458-vlp7j                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I1014 08:47:28.602433   15224 command_runner.go:130] >   kube-system                 coredns-7c65d6cfc9-fs9ct                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     24m
	I1014 08:47:28.602433   15224 command_runner.go:130] >   kube-system                 etcd-multinode-671000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         71s
	I1014 08:47:28.602433   15224 command_runner.go:130] >   kube-system                 kindnet-wqrx6                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      24m
	I1014 08:47:28.602433   15224 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-671000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         73s
	I1014 08:47:28.602433   15224 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-671000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         24m
	I1014 08:47:28.602433   15224 command_runner.go:130] >   kube-system                 kube-proxy-r74dx                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	I1014 08:47:28.602433   15224 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-671000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24m
	I1014 08:47:28.602433   15224 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	I1014 08:47:28.602972   15224 command_runner.go:130] > Allocated resources:
	I1014 08:47:28.602972   15224 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I1014 08:47:28.602972   15224 command_runner.go:130] >   Resource           Requests     Limits
	I1014 08:47:28.602972   15224 command_runner.go:130] >   --------           --------     ------
	I1014 08:47:28.602972   15224 command_runner.go:130] >   cpu                850m (42%)   100m (5%)
	I1014 08:47:28.603092   15224 command_runner.go:130] >   memory             220Mi (10%)  220Mi (10%)
	I1014 08:47:28.603092   15224 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I1014 08:47:28.603092   15224 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I1014 08:47:28.603092   15224 command_runner.go:130] > Events:
	I1014 08:47:28.603092   15224 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I1014 08:47:28.603092   15224 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I1014 08:47:28.603154   15224 command_runner.go:130] >   Normal  Starting                 24m                kube-proxy       
	I1014 08:47:28.603154   15224 command_runner.go:130] >   Normal  Starting                 70s                kube-proxy       
	I1014 08:47:28.603154   15224 command_runner.go:130] >   Normal  Starting                 24m                kubelet          Starting kubelet.
	I1014 08:47:28.603154   15224 command_runner.go:130] >   Normal  NodeHasSufficientMemory  24m (x8 over 24m)  kubelet          Node multinode-671000 status is now: NodeHasSufficientMemory
	I1014 08:47:28.603218   15224 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    24m (x8 over 24m)  kubelet          Node multinode-671000 status is now: NodeHasNoDiskPressure
	I1014 08:47:28.603242   15224 command_runner.go:130] >   Normal  NodeHasSufficientPID     24m (x7 over 24m)  kubelet          Node multinode-671000 status is now: NodeHasSufficientPID
	I1014 08:47:28.603271   15224 command_runner.go:130] >   Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	I1014 08:47:28.603271   15224 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    24m                kubelet          Node multinode-671000 status is now: NodeHasNoDiskPressure
	I1014 08:47:28.603271   15224 command_runner.go:130] >   Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	I1014 08:47:28.603271   15224 command_runner.go:130] >   Normal  NodeHasSufficientMemory  24m                kubelet          Node multinode-671000 status is now: NodeHasSufficientMemory
	I1014 08:47:28.603271   15224 command_runner.go:130] >   Normal  NodeHasSufficientPID     24m                kubelet          Node multinode-671000 status is now: NodeHasSufficientPID
	I1014 08:47:28.603271   15224 command_runner.go:130] >   Normal  Starting                 24m                kubelet          Starting kubelet.
	I1014 08:47:28.603271   15224 command_runner.go:130] >   Normal  RegisteredNode           24m                node-controller  Node multinode-671000 event: Registered Node multinode-671000 in Controller
	I1014 08:47:28.603271   15224 command_runner.go:130] >   Normal  NodeReady                24m                kubelet          Node multinode-671000 status is now: NodeReady
	I1014 08:47:28.603271   15224 command_runner.go:130] >   Normal  Starting                 79s                kubelet          Starting kubelet.
	I1014 08:47:28.603271   15224 command_runner.go:130] >   Normal  NodeAllocatableEnforced  79s                kubelet          Updated Node Allocatable limit across pods
	I1014 08:47:28.603271   15224 command_runner.go:130] >   Normal  NodeHasSufficientMemory  78s (x8 over 79s)  kubelet          Node multinode-671000 status is now: NodeHasSufficientMemory
	I1014 08:47:28.603271   15224 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    78s (x8 over 79s)  kubelet          Node multinode-671000 status is now: NodeHasNoDiskPressure
	I1014 08:47:28.603271   15224 command_runner.go:130] >   Normal  NodeHasSufficientPID     78s (x7 over 79s)  kubelet          Node multinode-671000 status is now: NodeHasSufficientPID
	I1014 08:47:28.603271   15224 command_runner.go:130] >   Normal  RegisteredNode           70s                node-controller  Node multinode-671000 event: Registered Node multinode-671000 in Controller
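
The dump above is the tail of a kubectl describe output for the primary control-plane node, as captured by minikube's log gathering. The event table carries two generations of kubelet events: one set roughly 24m old from the original boot, and a fresh set 70-79s old from the restart under test, after which the node re-registered with the node controller. As a minimal sketch (not part of the test suite; the kubeconfig path below is an assumption), the same node conditions can be read programmatically with client-go:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed path; the test harness points KUBECONFIG at its own file.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        // One line per condition, mirroring the Conditions tables in this report.
        for _, n := range nodes.Items {
            for _, c := range n.Status.Conditions {
                fmt.Printf("%s\t%s=%s\t%s\n", n.Name, c.Type, c.Status, c.Reason)
            }
        }
    }
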
	I1014 08:47:28.626872   15224 command_runner.go:130] > Name:               multinode-671000-m02
	I1014 08:47:28.626872   15224 command_runner.go:130] > Roles:              <none>
	I1014 08:47:28.626872   15224 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I1014 08:47:28.626872   15224 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I1014 08:47:28.626872   15224 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I1014 08:47:28.626872   15224 command_runner.go:130] >                     kubernetes.io/hostname=multinode-671000-m02
	I1014 08:47:28.626872   15224 command_runner.go:130] >                     kubernetes.io/os=linux
	I1014 08:47:28.626872   15224 command_runner.go:130] >                     minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b
	I1014 08:47:28.626872   15224 command_runner.go:130] >                     minikube.k8s.io/name=multinode-671000
	I1014 08:47:28.626872   15224 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I1014 08:47:28.626872   15224 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_10_14T08_25_50_0700
	I1014 08:47:28.626872   15224 command_runner.go:130] >                     minikube.k8s.io/version=v1.34.0
	I1014 08:47:28.627902   15224 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I1014 08:47:28.627902   15224 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I1014 08:47:28.627902   15224 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I1014 08:47:28.627902   15224 command_runner.go:130] > CreationTimestamp:  Mon, 14 Oct 2024 15:25:49 +0000
	I1014 08:47:28.627902   15224 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I1014 08:47:28.627902   15224 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I1014 08:47:28.627902   15224 command_runner.go:130] > Unschedulable:      false
	I1014 08:47:28.627902   15224 command_runner.go:130] > Lease:
	I1014 08:47:28.627902   15224 command_runner.go:130] >   HolderIdentity:  multinode-671000-m02
	I1014 08:47:28.627902   15224 command_runner.go:130] >   AcquireTime:     <unset>
	I1014 08:47:28.627902   15224 command_runner.go:130] >   RenewTime:       Mon, 14 Oct 2024 15:43:00 +0000
	I1014 08:47:28.627902   15224 command_runner.go:130] > Conditions:
	I1014 08:47:28.627902   15224 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I1014 08:47:28.627902   15224 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I1014 08:47:28.627902   15224 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 14 Oct 2024 15:42:08 +0000   Mon, 14 Oct 2024 15:43:45 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I1014 08:47:28.627902   15224 command_runner.go:130] >   DiskPressure     Unknown   Mon, 14 Oct 2024 15:42:08 +0000   Mon, 14 Oct 2024 15:43:45 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I1014 08:47:28.627902   15224 command_runner.go:130] >   PIDPressure      Unknown   Mon, 14 Oct 2024 15:42:08 +0000   Mon, 14 Oct 2024 15:43:45 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I1014 08:47:28.627902   15224 command_runner.go:130] >   Ready            Unknown   Mon, 14 Oct 2024 15:42:08 +0000   Mon, 14 Oct 2024 15:43:45 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I1014 08:47:28.627902   15224 command_runner.go:130] > Addresses:
	I1014 08:47:28.627902   15224 command_runner.go:130] >   InternalIP:  172.20.109.137
	I1014 08:47:28.627902   15224 command_runner.go:130] >   Hostname:    multinode-671000-m02
	I1014 08:47:28.627902   15224 command_runner.go:130] > Capacity:
	I1014 08:47:28.627902   15224 command_runner.go:130] >   cpu:                2
	I1014 08:47:28.627902   15224 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I1014 08:47:28.627902   15224 command_runner.go:130] >   hugepages-2Mi:      0
	I1014 08:47:28.627902   15224 command_runner.go:130] >   memory:             2164264Ki
	I1014 08:47:28.627902   15224 command_runner.go:130] >   pods:               110
	I1014 08:47:28.627902   15224 command_runner.go:130] > Allocatable:
	I1014 08:47:28.627902   15224 command_runner.go:130] >   cpu:                2
	I1014 08:47:28.627902   15224 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I1014 08:47:28.627902   15224 command_runner.go:130] >   hugepages-2Mi:      0
	I1014 08:47:28.627902   15224 command_runner.go:130] >   memory:             2164264Ki
	I1014 08:47:28.627902   15224 command_runner.go:130] >   pods:               110
	I1014 08:47:28.627902   15224 command_runner.go:130] > System Info:
	I1014 08:47:28.627902   15224 command_runner.go:130] >   Machine ID:                 d57a3470ff3c4f03abf025a19c5c23d9
	I1014 08:47:28.627902   15224 command_runner.go:130] >   System UUID:                d36e677f-0d87-a94f-af28-d8324326f88f
	I1014 08:47:28.627902   15224 command_runner.go:130] >   Boot ID:                    33649cab-156e-4789-8adf-a6fc060b3de7
	I1014 08:47:28.627902   15224 command_runner.go:130] >   Kernel Version:             5.10.207
	I1014 08:47:28.627902   15224 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I1014 08:47:28.627902   15224 command_runner.go:130] >   Operating System:           linux
	I1014 08:47:28.627902   15224 command_runner.go:130] >   Architecture:               amd64
	I1014 08:47:28.627902   15224 command_runner.go:130] >   Container Runtime Version:  docker://27.3.1
	I1014 08:47:28.627902   15224 command_runner.go:130] >   Kubelet Version:            v1.31.1
	I1014 08:47:28.627902   15224 command_runner.go:130] >   Kube-Proxy Version:         v1.31.1
	I1014 08:47:28.627902   15224 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I1014 08:47:28.627902   15224 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I1014 08:47:28.627902   15224 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I1014 08:47:28.627902   15224 command_runner.go:130] >   Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I1014 08:47:28.627902   15224 command_runner.go:130] >   ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	I1014 08:47:28.627902   15224 command_runner.go:130] >   default                     busybox-7dff88458-bnqj6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I1014 08:47:28.627902   15224 command_runner.go:130] >   kube-system                 kindnet-rgbjf              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      21m
	I1014 08:47:28.627902   15224 command_runner.go:130] >   kube-system                 kube-proxy-kbpjf           0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	I1014 08:47:28.627902   15224 command_runner.go:130] > Allocated resources:
	I1014 08:47:28.627902   15224 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I1014 08:47:28.627902   15224 command_runner.go:130] >   Resource           Requests   Limits
	I1014 08:47:28.627902   15224 command_runner.go:130] >   --------           --------   ------
	I1014 08:47:28.627902   15224 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I1014 08:47:28.627902   15224 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I1014 08:47:28.627902   15224 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I1014 08:47:28.627902   15224 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I1014 08:47:28.627902   15224 command_runner.go:130] > Events:
	I1014 08:47:28.627902   15224 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I1014 08:47:28.627902   15224 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I1014 08:47:28.627902   15224 command_runner.go:130] >   Normal  Starting                 21m                kube-proxy       
	I1014 08:47:28.627902   15224 command_runner.go:130] >   Normal  NodeHasSufficientMemory  21m (x2 over 21m)  kubelet          Node multinode-671000-m02 status is now: NodeHasSufficientMemory
	I1014 08:47:28.627902   15224 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    21m (x2 over 21m)  kubelet          Node multinode-671000-m02 status is now: NodeHasNoDiskPressure
	I1014 08:47:28.627902   15224 command_runner.go:130] >   Normal  NodeHasSufficientPID     21m (x2 over 21m)  kubelet          Node multinode-671000-m02 status is now: NodeHasSufficientPID
	I1014 08:47:28.627902   15224 command_runner.go:130] >   Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	I1014 08:47:28.627902   15224 command_runner.go:130] >   Normal  RegisteredNode           21m                node-controller  Node multinode-671000-m02 event: Registered Node multinode-671000-m02 in Controller
	I1014 08:47:28.627902   15224 command_runner.go:130] >   Normal  NodeReady                21m                kubelet          Node multinode-671000-m02 status is now: NodeReady
	I1014 08:47:28.627902   15224 command_runner.go:130] >   Normal  NodeNotReady             3m43s              node-controller  Node multinode-671000-m02 status is now: NodeNotReady
	I1014 08:47:28.628864   15224 command_runner.go:130] >   Normal  RegisteredNode           70s                node-controller  Node multinode-671000-m02 event: Registered Node multinode-671000-m02 in Controller
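
The m02 kubelet last renewed its node lease at 15:43:00; roughly 40 seconds later (the node-lifecycle controller's default grace period) every condition flipped to Unknown with reason NodeStatusUnknown, and the controller applied the node.kubernetes.io/unreachable NoSchedule and NoExecute taints shown above. Under the default 300s unreachable toleration, the busybox pod still listed on this node becomes an eviction candidate. A companion sketch (same client-go assumptions as above) that surfaces just those taints:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Kubeconfig path is an assumption, as in the previous sketch.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        // Unreachable nodes print as, e.g., node.kubernetes.io/unreachable:NoExecute.
        for _, n := range nodes.Items {
            for _, t := range n.Spec.Taints {
                fmt.Printf("%s\t%s:%s\n", n.Name, t.Key, t.Effect)
            }
        }
    }
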
	I1014 08:47:28.648896   15224 command_runner.go:130] > Name:               multinode-671000-m03
	I1014 08:47:28.648896   15224 command_runner.go:130] > Roles:              <none>
	I1014 08:47:28.648896   15224 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I1014 08:47:28.648896   15224 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I1014 08:47:28.648896   15224 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I1014 08:47:28.648896   15224 command_runner.go:130] >                     kubernetes.io/hostname=multinode-671000-m03
	I1014 08:47:28.648896   15224 command_runner.go:130] >                     kubernetes.io/os=linux
	I1014 08:47:28.648896   15224 command_runner.go:130] >                     minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b
	I1014 08:47:28.648896   15224 command_runner.go:130] >                     minikube.k8s.io/name=multinode-671000
	I1014 08:47:28.648896   15224 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I1014 08:47:28.648896   15224 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_10_14T08_41_35_0700
	I1014 08:47:28.648896   15224 command_runner.go:130] >                     minikube.k8s.io/version=v1.34.0
	I1014 08:47:28.648896   15224 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I1014 08:47:28.648896   15224 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I1014 08:47:28.648896   15224 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I1014 08:47:28.648896   15224 command_runner.go:130] > CreationTimestamp:  Mon, 14 Oct 2024 15:41:34 +0000
	I1014 08:47:28.648896   15224 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I1014 08:47:28.648896   15224 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I1014 08:47:28.648896   15224 command_runner.go:130] > Unschedulable:      false
	I1014 08:47:28.648896   15224 command_runner.go:130] > Lease:
	I1014 08:47:28.648896   15224 command_runner.go:130] >   HolderIdentity:  multinode-671000-m03
	I1014 08:47:28.648896   15224 command_runner.go:130] >   AcquireTime:     <unset>
	I1014 08:47:28.648896   15224 command_runner.go:130] >   RenewTime:       Mon, 14 Oct 2024 15:42:46 +0000
	I1014 08:47:28.648896   15224 command_runner.go:130] > Conditions:
	I1014 08:47:28.648896   15224 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I1014 08:47:28.648896   15224 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I1014 08:47:28.648896   15224 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 14 Oct 2024 15:41:53 +0000   Mon, 14 Oct 2024 15:43:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I1014 08:47:28.648896   15224 command_runner.go:130] >   DiskPressure     Unknown   Mon, 14 Oct 2024 15:41:53 +0000   Mon, 14 Oct 2024 15:43:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I1014 08:47:28.648896   15224 command_runner.go:130] >   PIDPressure      Unknown   Mon, 14 Oct 2024 15:41:53 +0000   Mon, 14 Oct 2024 15:43:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I1014 08:47:28.648896   15224 command_runner.go:130] >   Ready            Unknown   Mon, 14 Oct 2024 15:41:53 +0000   Mon, 14 Oct 2024 15:43:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I1014 08:47:28.648896   15224 command_runner.go:130] > Addresses:
	I1014 08:47:28.648896   15224 command_runner.go:130] >   InternalIP:  172.20.102.29
	I1014 08:47:28.648896   15224 command_runner.go:130] >   Hostname:    multinode-671000-m03
	I1014 08:47:28.648896   15224 command_runner.go:130] > Capacity:
	I1014 08:47:28.648896   15224 command_runner.go:130] >   cpu:                2
	I1014 08:47:28.648896   15224 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I1014 08:47:28.648896   15224 command_runner.go:130] >   hugepages-2Mi:      0
	I1014 08:47:28.648896   15224 command_runner.go:130] >   memory:             2164264Ki
	I1014 08:47:28.648896   15224 command_runner.go:130] >   pods:               110
	I1014 08:47:28.648896   15224 command_runner.go:130] > Allocatable:
	I1014 08:47:28.648896   15224 command_runner.go:130] >   cpu:                2
	I1014 08:47:28.648896   15224 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I1014 08:47:28.648896   15224 command_runner.go:130] >   hugepages-2Mi:      0
	I1014 08:47:28.648896   15224 command_runner.go:130] >   memory:             2164264Ki
	I1014 08:47:28.648896   15224 command_runner.go:130] >   pods:               110
	I1014 08:47:28.648896   15224 command_runner.go:130] > System Info:
	I1014 08:47:28.648896   15224 command_runner.go:130] >   Machine ID:                 6da8cf5e96c04d55b9129d0893534bf2
	I1014 08:47:28.648896   15224 command_runner.go:130] >   System UUID:                49616488-815a-3f43-8f47-13dbf29b6ca7
	I1014 08:47:28.648896   15224 command_runner.go:130] >   Boot ID:                    d9fe58fb-ac8e-4430-9563-1b3e9fd35ffd
	I1014 08:47:28.648896   15224 command_runner.go:130] >   Kernel Version:             5.10.207
	I1014 08:47:28.648896   15224 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I1014 08:47:28.648896   15224 command_runner.go:130] >   Operating System:           linux
	I1014 08:47:28.648896   15224 command_runner.go:130] >   Architecture:               amd64
	I1014 08:47:28.648896   15224 command_runner.go:130] >   Container Runtime Version:  docker://27.3.1
	I1014 08:47:28.648896   15224 command_runner.go:130] >   Kubelet Version:            v1.31.1
	I1014 08:47:28.648896   15224 command_runner.go:130] >   Kube-Proxy Version:         v1.31.1
	I1014 08:47:28.648896   15224 command_runner.go:130] > PodCIDR:                      10.244.3.0/24
	I1014 08:47:28.648896   15224 command_runner.go:130] > PodCIDRs:                     10.244.3.0/24
	I1014 08:47:28.648896   15224 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I1014 08:47:28.648896   15224 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I1014 08:47:28.648896   15224 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I1014 08:47:28.648896   15224 command_runner.go:130] >   kube-system                 kindnet-5rqxq       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	I1014 08:47:28.648896   15224 command_runner.go:130] >   kube-system                 kube-proxy-n6txs    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	I1014 08:47:28.648896   15224 command_runner.go:130] > Allocated resources:
	I1014 08:47:28.648896   15224 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I1014 08:47:28.648896   15224 command_runner.go:130] >   Resource           Requests   Limits
	I1014 08:47:28.648896   15224 command_runner.go:130] >   --------           --------   ------
	I1014 08:47:28.648896   15224 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I1014 08:47:28.648896   15224 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I1014 08:47:28.648896   15224 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I1014 08:47:28.648896   15224 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I1014 08:47:28.648896   15224 command_runner.go:130] > Events:
	I1014 08:47:28.648896   15224 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I1014 08:47:28.648896   15224 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I1014 08:47:28.648896   15224 command_runner.go:130] >   Normal  Starting                 5m50s                  kube-proxy       
	I1014 08:47:28.648896   15224 command_runner.go:130] >   Normal  Starting                 16m                    kube-proxy       
	I1014 08:47:28.648896   15224 command_runner.go:130] >   Normal  NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	I1014 08:47:28.648896   15224 command_runner.go:130] >   Normal  NodeHasSufficientMemory  16m (x2 over 16m)      kubelet          Node multinode-671000-m03 status is now: NodeHasSufficientMemory
	I1014 08:47:28.648896   15224 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    16m (x2 over 16m)      kubelet          Node multinode-671000-m03 status is now: NodeHasNoDiskPressure
	I1014 08:47:28.648896   15224 command_runner.go:130] >   Normal  NodeHasSufficientPID     16m (x2 over 16m)      kubelet          Node multinode-671000-m03 status is now: NodeHasSufficientPID
	I1014 08:47:28.648896   15224 command_runner.go:130] >   Normal  NodeReady                16m                    kubelet          Node multinode-671000-m03 status is now: NodeReady
	I1014 08:47:28.648896   15224 command_runner.go:130] >   Normal  Starting                 5m54s                  kubelet          Starting kubelet.
	I1014 08:47:28.648896   15224 command_runner.go:130] >   Normal  NodeHasSufficientMemory  5m54s (x2 over 5m54s)  kubelet          Node multinode-671000-m03 status is now: NodeHasSufficientMemory
	I1014 08:47:28.648896   15224 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    5m54s (x2 over 5m54s)  kubelet          Node multinode-671000-m03 status is now: NodeHasNoDiskPressure
	I1014 08:47:28.648896   15224 command_runner.go:130] >   Normal  NodeHasSufficientPID     5m54s (x2 over 5m54s)  kubelet          Node multinode-671000-m03 status is now: NodeHasSufficientPID
	I1014 08:47:28.648896   15224 command_runner.go:130] >   Normal  NodeAllocatableEnforced  5m54s                  kubelet          Updated Node Allocatable limit across pods
	I1014 08:47:28.649865   15224 command_runner.go:130] >   Normal  RegisteredNode           5m49s                  node-controller  Node multinode-671000-m03 event: Registered Node multinode-671000-m03 in Controller
	I1014 08:47:28.649865   15224 command_runner.go:130] >   Normal  NodeReady                5m35s                  kubelet          Node multinode-671000-m03 status is now: NodeReady
	I1014 08:47:28.649865   15224 command_runner.go:130] >   Normal  NodeNotReady             3m59s                  node-controller  Node multinode-671000-m03 status is now: NodeNotReady
	I1014 08:47:28.649865   15224 command_runner.go:130] >   Normal  RegisteredNode           70s                    node-controller  Node multinode-671000-m03 event: Registered Node multinode-671000-m03 in Controller
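
multinode-671000-m03 shows the identical unreachable pattern (lease renewals stopped at 15:42:46, conditions Unknown as of 15:43:29), so at this point only the restarted control-plane node is healthy. The gatherer next tails the last 400 lines of the kube-controller-manager container over SSH, as the Run line below shows. A hypothetical manual reproduction of that step, assuming minikube ssh's command pass-through; the container ID 712aad669c9f is specific to this run and would need to be looked up again with docker ps inside the VM:

    package main

    import (
        "os"
        "os/exec"
    )

    func main() {
        // Mirrors the ssh_runner step in the log: docker logs --tail 400 <id>.
        cmd := exec.Command("out/minikube-windows-amd64.exe",
            "-p", "multinode-671000", "ssh", "--",
            "docker", "logs", "--tail", "400", "712aad669c9f")
        cmd.Stdout = os.Stdout
        cmd.Stderr = os.Stderr
        if err := cmd.Run(); err != nil {
            os.Exit(1)
        }
    }
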
	I1014 08:47:28.660875   15224 logs.go:123] Gathering logs for kube-controller-manager [712aad669c9f] ...
	I1014 08:47:28.660875   15224 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 712aad669c9f"
	I1014 08:47:28.689866   15224 command_runner.go:130] ! I1014 15:22:34.276457       1 serving.go:386] Generated self-signed cert in-memory
	I1014 08:47:28.690767   15224 command_runner.go:130] ! I1014 15:22:34.721812       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I1014 08:47:28.690767   15224 command_runner.go:130] ! I1014 15:22:34.722099       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 08:47:28.690964   15224 command_runner.go:130] ! I1014 15:22:34.724748       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1014 08:47:28.690964   15224 command_runner.go:130] ! I1014 15:22:34.725085       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1014 08:47:28.691055   15224 command_runner.go:130] ! I1014 15:22:34.725754       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I1014 08:47:28.691884   15224 command_runner.go:130] ! I1014 15:22:34.725985       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1014 08:47:28.691884   15224 command_runner.go:130] ! I1014 15:22:39.207411       1 controllermanager.go:797] "Started controller" controller="serviceaccount-token-controller"
	I1014 08:47:28.692451   15224 command_runner.go:130] ! I1014 15:22:39.208026       1 controllermanager.go:749] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I1014 08:47:28.692740   15224 command_runner.go:130] ! I1014 15:22:39.207651       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I1014 08:47:28.692905   15224 command_runner.go:130] ! I1014 15:22:39.210064       1 shared_informer.go:320] Caches are synced for tokens
	I1014 08:47:28.692925   15224 command_runner.go:130] ! I1014 15:22:39.224528       1 controllermanager.go:797] "Started controller" controller="taint-eviction-controller"
	I1014 08:47:28.693559   15224 command_runner.go:130] ! I1014 15:22:39.224966       1 taint_eviction.go:281] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I1014 08:47:28.693559   15224 command_runner.go:130] ! I1014 15:22:39.225213       1 taint_eviction.go:287] "Sending events to api server" logger="taint-eviction-controller"
	I1014 08:47:28.693559   15224 command_runner.go:130] ! I1014 15:22:39.226734       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I1014 08:47:28.693559   15224 command_runner.go:130] ! I1014 15:22:39.238395       1 controllermanager.go:797] "Started controller" controller="pod-garbage-collector-controller"
	I1014 08:47:28.693559   15224 command_runner.go:130] ! I1014 15:22:39.238610       1 gc_controller.go:99] "Starting GC controller" logger="pod-garbage-collector-controller"
	I1014 08:47:28.693559   15224 command_runner.go:130] ! I1014 15:22:39.239186       1 shared_informer.go:313] Waiting for caches to sync for GC
	I1014 08:47:28.693559   15224 command_runner.go:130] ! I1014 15:22:39.257957       1 controllermanager.go:797] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I1014 08:47:28.693559   15224 command_runner.go:130] ! I1014 15:22:39.258113       1 horizontal.go:201] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I1014 08:47:28.694290   15224 command_runner.go:130] ! I1014 15:22:39.264110       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I1014 08:47:28.694317   15224 command_runner.go:130] ! I1014 15:22:39.291746       1 controllermanager.go:797] "Started controller" controller="token-cleaner-controller"
	I1014 08:47:28.694317   15224 command_runner.go:130] ! I1014 15:22:39.291968       1 tokencleaner.go:117] "Starting token cleaner controller" logger="token-cleaner-controller"
	I1014 08:47:28.694461   15224 command_runner.go:130] ! I1014 15:22:39.292012       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I1014 08:47:28.694599   15224 command_runner.go:130] ! I1014 15:22:39.292035       1 shared_informer.go:320] Caches are synced for token_cleaner
	I1014 08:47:28.694825   15224 command_runner.go:130] ! E1014 15:22:39.298368       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I1014 08:47:28.694825   15224 command_runner.go:130] ! I1014 15:22:39.298490       1 controllermanager.go:775] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I1014 08:47:28.694825   15224 command_runner.go:130] ! I1014 15:22:39.320068       1 controllermanager.go:797] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I1014 08:47:28.694825   15224 command_runner.go:130] ! I1014 15:22:39.321579       1 pvc_protection_controller.go:105] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I1014 08:47:28.695209   15224 command_runner.go:130] ! I1014 15:22:39.322507       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I1014 08:47:28.695209   15224 command_runner.go:130] ! I1014 15:22:39.334562       1 controllermanager.go:797] "Started controller" controller="endpointslice-mirroring-controller"
	I1014 08:47:28.695209   15224 command_runner.go:130] ! I1014 15:22:39.335065       1 endpointslicemirroring_controller.go:227] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I1014 08:47:28.695209   15224 command_runner.go:130] ! I1014 15:22:39.335174       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I1014 08:47:28.695209   15224 command_runner.go:130] ! I1014 15:22:39.357454       1 controllermanager.go:797] "Started controller" controller="serviceaccount-controller"
	I1014 08:47:28.695209   15224 command_runner.go:130] ! I1014 15:22:39.357636       1 serviceaccounts_controller.go:114] "Starting service account controller" logger="serviceaccount-controller"
	I1014 08:47:28.695209   15224 command_runner.go:130] ! I1014 15:22:39.357669       1 shared_informer.go:313] Waiting for caches to sync for service account
	I1014 08:47:28.695209   15224 command_runner.go:130] ! I1014 15:22:39.377687       1 controllermanager.go:797] "Started controller" controller="daemonset-controller"
	I1014 08:47:28.695209   15224 command_runner.go:130] ! I1014 15:22:39.378056       1 daemon_controller.go:294] "Starting daemon sets controller" logger="daemonset-controller"
	I1014 08:47:28.695209   15224 command_runner.go:130] ! I1014 15:22:39.378087       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I1014 08:47:28.695209   15224 command_runner.go:130] ! I1014 15:22:39.416186       1 controllermanager.go:797] "Started controller" controller="disruption-controller"
	I1014 08:47:28.696194   15224 command_runner.go:130] ! I1014 15:22:39.416643       1 disruption.go:452] "Sending events to api server." logger="disruption-controller"
	I1014 08:47:28.696194   15224 command_runner.go:130] ! I1014 15:22:39.417022       1 disruption.go:463] "Starting disruption controller" logger="disruption-controller"
	I1014 08:47:28.696194   15224 command_runner.go:130] ! I1014 15:22:39.417371       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I1014 08:47:28.696194   15224 command_runner.go:130] ! I1014 15:22:39.469032       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I1014 08:47:28.696194   15224 command_runner.go:130] ! I1014 15:22:39.469507       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I1014 08:47:28.696194   15224 command_runner.go:130] ! I1014 15:22:39.469770       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I1014 08:47:28.696194   15224 command_runner.go:130] ! I1014 15:22:39.470779       1 controllermanager.go:797] "Started controller" controller="certificatesigningrequest-signing-controller"
	I1014 08:47:28.696194   15224 command_runner.go:130] ! I1014 15:22:39.470793       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I1014 08:47:28.696194   15224 command_runner.go:130] ! I1014 15:22:39.471453       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I1014 08:47:28.696194   15224 command_runner.go:130] ! I1014 15:22:39.470805       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I1014 08:47:28.696194   15224 command_runner.go:130] ! I1014 15:22:39.470829       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I1014 08:47:28.696194   15224 command_runner.go:130] ! I1014 15:22:39.471957       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I1014 08:47:28.696194   15224 command_runner.go:130] ! I1014 15:22:39.470841       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I1014 08:47:28.696194   15224 command_runner.go:130] ! I1014 15:22:39.470861       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I1014 08:47:28.697225   15224 command_runner.go:130] ! I1014 15:22:39.472955       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I1014 08:47:28.697225   15224 command_runner.go:130] ! I1014 15:22:39.470870       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I1014 08:47:28.697225   15224 command_runner.go:130] ! I1014 15:22:39.621859       1 controllermanager.go:797] "Started controller" controller="persistentvolume-binder-controller"
	I1014 08:47:28.697225   15224 command_runner.go:130] ! I1014 15:22:39.622638       1 pv_controller_base.go:308] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I1014 08:47:28.697225   15224 command_runner.go:130] ! I1014 15:22:39.623052       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I1014 08:47:28.697225   15224 command_runner.go:130] ! I1014 15:22:39.777984       1 controllermanager.go:797] "Started controller" controller="root-ca-certificate-publisher-controller"
	I1014 08:47:28.697225   15224 command_runner.go:130] ! I1014 15:22:39.778063       1 publisher.go:107] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I1014 08:47:28.697225   15224 command_runner.go:130] ! I1014 15:22:39.778141       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I1014 08:47:28.697225   15224 command_runner.go:130] ! I1014 15:22:39.918879       1 controllermanager.go:797] "Started controller" controller="replicationcontroller-controller"
	I1014 08:47:28.697225   15224 command_runner.go:130] ! I1014 15:22:39.919046       1 replica_set.go:217] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I1014 08:47:28.697225   15224 command_runner.go:130] ! I1014 15:22:39.919060       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I1014 08:47:28.698198   15224 command_runner.go:130] ! I1014 15:22:40.166453       1 controllermanager.go:797] "Started controller" controller="garbage-collector-controller"
	I1014 08:47:28.698198   15224 command_runner.go:130] ! I1014 15:22:40.167822       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I1014 08:47:28.698198   15224 command_runner.go:130] ! I1014 15:22:40.168483       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I1014 08:47:28.698198   15224 command_runner.go:130] ! I1014 15:22:40.168745       1 graph_builder.go:351] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I1014 08:47:28.698198   15224 command_runner.go:130] ! I1014 15:22:40.423412       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I1014 08:47:28.698198   15224 command_runner.go:130] ! I1014 15:22:40.423795       1 controllermanager.go:797] "Started controller" controller="node-ipam-controller"
	I1014 08:47:28.698198   15224 command_runner.go:130] ! I1014 15:22:40.424239       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I1014 08:47:28.698198   15224 command_runner.go:130] ! I1014 15:22:40.424496       1 controllermanager.go:775] "Warning: skipping controller" controller="node-route-controller"
	I1014 08:47:28.698198   15224 command_runner.go:130] ! I1014 15:22:40.424173       1 node_ipam_controller.go:141] "Starting ipam controller" logger="node-ipam-controller"
	I1014 08:47:28.698198   15224 command_runner.go:130] ! I1014 15:22:40.425286       1 shared_informer.go:313] Waiting for caches to sync for node
	I1014 08:47:28.698198   15224 command_runner.go:130] ! I1014 15:22:40.570482       1 controllermanager.go:797] "Started controller" controller="persistentvolume-expander-controller"
	I1014 08:47:28.698198   15224 command_runner.go:130] ! I1014 15:22:40.570669       1 expand_controller.go:328] "Starting expand controller" logger="persistentvolume-expander-controller"
	I1014 08:47:28.698198   15224 command_runner.go:130] ! I1014 15:22:40.570684       1 shared_informer.go:313] Waiting for caches to sync for expand
	I1014 08:47:28.698198   15224 command_runner.go:130] ! I1014 15:22:40.718742       1 controllermanager.go:797] "Started controller" controller="ttl-after-finished-controller"
	I1014 08:47:28.698198   15224 command_runner.go:130] ! I1014 15:22:40.718766       1 controllermanager.go:749] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I1014 08:47:28.698198   15224 command_runner.go:130] ! I1014 15:22:40.718828       1 ttlafterfinished_controller.go:112] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I1014 08:47:28.698198   15224 command_runner.go:130] ! I1014 15:22:40.718839       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I1014 08:47:28.698198   15224 command_runner.go:130] ! I1014 15:22:40.875244       1 controllermanager.go:797] "Started controller" controller="endpointslice-controller"
	I1014 08:47:28.698198   15224 command_runner.go:130] ! I1014 15:22:40.875390       1 endpointslice_controller.go:281] "Starting endpoint slice controller" logger="endpointslice-controller"
	I1014 08:47:28.698198   15224 command_runner.go:130] ! I1014 15:22:40.875405       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I1014 08:47:28.698198   15224 command_runner.go:130] ! I1014 15:22:41.022254       1 controllermanager.go:797] "Started controller" controller="replicaset-controller"
	I1014 08:47:28.698198   15224 command_runner.go:130] ! I1014 15:22:41.023099       1 replica_set.go:217] "Starting controller" logger="replicaset-controller" name="replicaset"
	I1014 08:47:28.698198   15224 command_runner.go:130] ! I1014 15:22:41.023161       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I1014 08:47:28.698198   15224 command_runner.go:130] ! I1014 15:22:41.176342       1 controllermanager.go:797] "Started controller" controller="statefulset-controller"
	I1014 08:47:28.698198   15224 command_runner.go:130] ! I1014 15:22:41.176460       1 stateful_set.go:166] "Starting stateful set controller" logger="statefulset-controller"
	I1014 08:47:28.698198   15224 command_runner.go:130] ! I1014 15:22:41.176471       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I1014 08:47:28.698198   15224 command_runner.go:130] ! I1014 15:22:41.319171       1 controllermanager.go:797] "Started controller" controller="persistentvolume-protection-controller"
	I1014 08:47:28.698198   15224 command_runner.go:130] ! I1014 15:22:41.319300       1 pv_protection_controller.go:81] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I1014 08:47:28.698198   15224 command_runner.go:130] ! I1014 15:22:41.319332       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I1014 08:47:28.698198   15224 command_runner.go:130] ! I1014 15:22:41.469263       1 controllermanager.go:797] "Started controller" controller="clusterrole-aggregation-controller"
	I1014 08:47:28.698198   15224 command_runner.go:130] ! I1014 15:22:41.469488       1 clusterroleaggregation_controller.go:194] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I1014 08:47:28.698198   15224 command_runner.go:130] ! I1014 15:22:41.470311       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I1014 08:47:28.698198   15224 command_runner.go:130] ! I1014 15:22:41.618471       1 controllermanager.go:797] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I1014 08:47:28.698198   15224 command_runner.go:130] ! I1014 15:22:41.618507       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I1014 08:47:28.698198   15224 command_runner.go:130] ! I1014 15:22:41.619582       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I1014 08:47:28.698198   15224 command_runner.go:130] ! I1014 15:22:41.813364       1 controllermanager.go:797] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I1014 08:47:28.698198   15224 command_runner.go:130] ! I1014 15:22:41.813412       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I1014 08:47:28.698198   15224 command_runner.go:130] ! I1014 15:22:42.123997       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I1014 08:47:28.699203   15224 command_runner.go:130] ! I1014 15:22:42.124656       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I1014 08:47:28.699203   15224 command_runner.go:130] ! I1014 15:22:42.125147       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I1014 08:47:28.699203   15224 command_runner.go:130] ! I1014 15:22:42.125502       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I1014 08:47:28.699203   15224 command_runner.go:130] ! I1014 15:22:42.125684       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I1014 08:47:28.699203   15224 command_runner.go:130] ! I1014 15:22:42.125715       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I1014 08:47:28.699203   15224 command_runner.go:130] ! I1014 15:22:42.125765       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I1014 08:47:28.699203   15224 command_runner.go:130] ! I1014 15:22:42.125789       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I1014 08:47:28.699203   15224 command_runner.go:130] ! I1014 15:22:42.125821       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I1014 08:47:28.699203   15224 command_runner.go:130] ! I1014 15:22:42.125850       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I1014 08:47:28.699203   15224 command_runner.go:130] ! I1014 15:22:42.125919       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I1014 08:47:28.699203   15224 command_runner.go:130] ! I1014 15:22:42.125938       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I1014 08:47:28.699203   15224 command_runner.go:130] ! W1014 15:22:42.125970       1 shared_informer.go:597] resyncPeriod 22h30m25.60471532s is smaller than resyncCheckPeriod 23h6m49.635307948s and the informer has already started. Changing it to 23h6m49.635307948s
	I1014 08:47:28.699203   15224 command_runner.go:130] ! W1014 15:22:42.126028       1 shared_informer.go:597] resyncPeriod 22h40m57.132720005s is smaller than resyncCheckPeriod 23h6m49.635307948s and the informer has already started. Changing it to 23h6m49.635307948s
	I1014 08:47:28.699203   15224 command_runner.go:130] ! I1014 15:22:42.126215       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I1014 08:47:28.699203   15224 command_runner.go:130] ! I1014 15:22:42.126353       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I1014 08:47:28.699203   15224 command_runner.go:130] ! I1014 15:22:42.126435       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I1014 08:47:28.699203   15224 command_runner.go:130] ! I1014 15:22:42.126461       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I1014 08:47:28.699203   15224 command_runner.go:130] ! I1014 15:22:42.126498       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I1014 08:47:28.699203   15224 command_runner.go:130] ! I1014 15:22:42.126514       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I1014 08:47:28.699203   15224 command_runner.go:130] ! I1014 15:22:42.126546       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I1014 08:47:28.699203   15224 command_runner.go:130] ! I1014 15:22:42.126572       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I1014 08:47:28.699203   15224 command_runner.go:130] ! I1014 15:22:42.126591       1 controllermanager.go:797] "Started controller" controller="resourcequota-controller"
	I1014 08:47:28.699203   15224 command_runner.go:130] ! I1014 15:22:42.127139       1 resource_quota_controller.go:300] "Starting resource quota controller" logger="resourcequota-controller"
	I1014 08:47:28.699203   15224 command_runner.go:130] ! I1014 15:22:42.127191       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I1014 08:47:28.699203   15224 command_runner.go:130] ! I1014 15:22:42.127239       1 resource_quota_monitor.go:308] "QuotaMonitor running" logger="resourcequota-controller"
	I1014 08:47:28.699203   15224 command_runner.go:130] ! I1014 15:22:42.377410       1 controllermanager.go:797] "Started controller" controller="namespace-controller"
	I1014 08:47:28.699203   15224 command_runner.go:130] ! I1014 15:22:42.378109       1 namespace_controller.go:202] "Starting namespace controller" logger="namespace-controller"
	I1014 08:47:28.699203   15224 command_runner.go:130] ! I1014 15:22:42.378533       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I1014 08:47:28.699203   15224 command_runner.go:130] ! I1014 15:22:42.520088       1 controllermanager.go:797] "Started controller" controller="job-controller"
	I1014 08:47:28.699203   15224 command_runner.go:130] ! I1014 15:22:42.520194       1 job_controller.go:226] "Starting job controller" logger="job-controller"
	I1014 08:47:28.699203   15224 command_runner.go:130] ! I1014 15:22:42.520661       1 shared_informer.go:313] Waiting for caches to sync for job
	I1014 08:47:28.699203   15224 command_runner.go:130] ! I1014 15:22:42.669141       1 controllermanager.go:797] "Started controller" controller="ttl-controller"
	I1014 08:47:28.699203   15224 command_runner.go:130] ! I1014 15:22:42.669227       1 ttl_controller.go:127] "Starting TTL controller" logger="ttl-controller"
	I1014 08:47:28.699203   15224 command_runner.go:130] ! I1014 15:22:42.669239       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I1014 08:47:28.699203   15224 command_runner.go:130] ! I1014 15:22:42.713738       1 node_lifecycle_controller.go:430] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I1014 08:47:28.699203   15224 command_runner.go:130] ! I1014 15:22:42.713795       1 controllermanager.go:797] "Started controller" controller="node-lifecycle-controller"
	I1014 08:47:28.699203   15224 command_runner.go:130] ! I1014 15:22:42.713972       1 node_lifecycle_controller.go:464] "Sending events to api server" logger="node-lifecycle-controller"
	I1014 08:47:28.699203   15224 command_runner.go:130] ! I1014 15:22:42.714019       1 node_lifecycle_controller.go:475] "Starting node controller" logger="node-lifecycle-controller"
	I1014 08:47:28.699203   15224 command_runner.go:130] ! I1014 15:22:42.714028       1 shared_informer.go:313] Waiting for caches to sync for taint
	I1014 08:47:28.699203   15224 command_runner.go:130] ! E1014 15:22:42.870353       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:42.870400       1 controllermanager.go:775] "Warning: skipping controller" controller="service-lb-controller"
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.022018       1 controllermanager.go:797] "Started controller" controller="persistentvolume-attach-detach-controller"
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.022670       1 attach_detach_controller.go:338] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.022756       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.169053       1 controllermanager.go:797] "Started controller" controller="ephemeral-volume-controller"
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.169165       1 controller.go:173] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.169572       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.319453       1 controllermanager.go:797] "Started controller" controller="endpoints-controller"
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.319620       1 endpoints_controller.go:182] "Starting endpoint controller" logger="endpoints-controller"
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.319648       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.471065       1 controllermanager.go:797] "Started controller" controller="deployment-controller"
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.471807       1 deployment_controller.go:173] "Starting controller" logger="deployment-controller" controller="deployment"
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.472102       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.621382       1 controllermanager.go:797] "Started controller" controller="cronjob-controller"
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.621522       1 cronjob_controllerv2.go:145] "Starting cronjob controller v2" logger="cronjob-controller"
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.621537       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.663267       1 controllermanager.go:797] "Started controller" controller="certificatesigningrequest-approving-controller"
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.663415       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.663427       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.822946       1 controllermanager.go:797] "Started controller" controller="bootstrap-signer-controller"
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.822992       1 controllermanager.go:775] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.823061       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.863507       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.863638       1 controllermanager.go:797] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.863659       1 controllermanager.go:749] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.902554       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.913563       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.916687       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-671000\" does not exist"
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.921355       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.921578       1 shared_informer.go:320] Caches are synced for cronjob
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.921709       1 shared_informer.go:320] Caches are synced for job
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.921822       1 shared_informer.go:320] Caches are synced for PV protection
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.922806       1 shared_informer.go:320] Caches are synced for PVC protection
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.922814       1 shared_informer.go:320] Caches are synced for TTL after finished
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.924127       1 shared_informer.go:320] Caches are synced for persistent volume
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.924751       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.925596       1 shared_informer.go:320] Caches are synced for node
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.925653       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.925863       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.925961       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.925971       1 shared_informer.go:320] Caches are synced for cidrallocator
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.927918       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.933656       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.935993       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.939827       1 shared_informer.go:320] Caches are synced for GC
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.945652       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-671000" podCIDRs=["10.244.0.0/24"]
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.945733       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000"
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.946434       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000"
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.958217       1 shared_informer.go:320] Caches are synced for service account
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.964566       1 shared_informer.go:320] Caches are synced for HPA
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.970909       1 shared_informer.go:320] Caches are synced for TTL
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.971119       1 shared_informer.go:320] Caches are synced for ephemeral
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.971337       1 shared_informer.go:320] Caches are synced for expand
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.975501       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.976796       1 shared_informer.go:320] Caches are synced for stateful set
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.978344       1 shared_informer.go:320] Caches are synced for crt configmap
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.978435       1 shared_informer.go:320] Caches are synced for daemon sets
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:43.980084       1 shared_informer.go:320] Caches are synced for namespace
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:44.014728       1 shared_informer.go:320] Caches are synced for taint
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:44.015046       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1014 08:47:28.700202   15224 command_runner.go:130] ! I1014 15:22:44.015932       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-671000"
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:22:44.016156       1 node_lifecycle_controller.go:1036] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:22:44.020094       1 shared_informer.go:320] Caches are synced for endpoint
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:22:44.020640       1 shared_informer.go:320] Caches are synced for ReplicationController
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:22:44.071958       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:22:44.103447       1 shared_informer.go:320] Caches are synced for resource quota
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:22:44.118642       1 shared_informer.go:320] Caches are synced for disruption
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:22:44.123565       1 shared_informer.go:320] Caches are synced for attach detach
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:22:44.124082       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:22:44.128052       1 shared_informer.go:320] Caches are synced for resource quota
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:22:44.164601       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:22:44.170410       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:22:44.172085       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:22:44.172168       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:22:44.172762       1 shared_informer.go:320] Caches are synced for deployment
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:22:44.173998       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:22:44.583260       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000"
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:22:44.634360       1 shared_informer.go:320] Caches are synced for garbage collector
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:22:44.669630       1 shared_informer.go:320] Caches are synced for garbage collector
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:22:44.669841       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:22:45.450540       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="308.738304ms"
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:22:45.524372       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="73.173482ms"
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:22:45.524478       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="65.397µs"
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:22:46.000395       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="74.724912ms"
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:22:46.017930       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="16.329807ms"
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:22:46.018255       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="275.988µs"
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:23:06.558708       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000"
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:23:06.579629       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000"
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:23:06.601705       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="101.399µs"
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:23:06.643522       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="58.099µs"
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:23:08.868021       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="148.904µs"
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:23:08.936155       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="33.695698ms"
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:23:08.939220       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="3.012072ms"
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:23:09.023157       1 node_lifecycle_controller.go:1055] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:23:10.921399       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000"
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:25:49.920125       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-671000-m02\" does not exist"
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:25:49.955308       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-671000-m02" podCIDRs=["10.244.1.0/24"]
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:25:49.956041       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:25:49.956493       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:25:50.332394       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:25:50.885049       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:25:54.059204       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-671000-m02"
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:25:54.342262       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:26:00.157293       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:26:18.720546       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:26:18.720611       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-671000-m02"
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:26:18.738467       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:26:19.084143       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:26:20.411603       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:26:44.435156       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="75.721873ms"
	I1014 08:47:28.701186   15224 command_runner.go:130] ! I1014 15:26:44.496244       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="60.852418ms"
	I1014 08:47:28.702210   15224 command_runner.go:130] ! I1014 15:26:44.496945       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="131.501µs"
	I1014 08:47:28.702210   15224 command_runner.go:130] ! I1014 15:26:44.540742       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="117.6µs"
	I1014 08:47:28.702210   15224 command_runner.go:130] ! I1014 15:26:47.680512       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="13.465591ms"
	I1014 08:47:28.702210   15224 command_runner.go:130] ! I1014 15:26:47.680616       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="27.8µs"
	I1014 08:47:28.702210   15224 command_runner.go:130] ! I1014 15:26:47.878633       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="13.308091ms"
	I1014 08:47:28.702210   15224 command_runner.go:130] ! I1014 15:26:47.878779       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="30.7µs"
	I1014 08:47:28.702210   15224 command_runner.go:130] ! I1014 15:26:50.724728       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:28.702210   15224 command_runner.go:130] ! I1014 15:27:15.823577       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000"
	I1014 08:47:28.702210   15224 command_runner.go:130] ! I1014 15:30:35.115559       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-671000-m02"
	I1014 08:47:28.702210   15224 command_runner.go:130] ! I1014 15:30:35.116078       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-671000-m03\" does not exist"
	I1014 08:47:28.702210   15224 command_runner.go:130] ! I1014 15:30:35.128392       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-671000-m03" podCIDRs=["10.244.2.0/24"]
	I1014 08:47:28.702210   15224 command_runner.go:130] ! I1014 15:30:35.128677       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:28.702210   15224 command_runner.go:130] ! I1014 15:30:35.128924       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:28.702210   15224 command_runner.go:130] ! I1014 15:30:35.152829       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:28.702210   15224 command_runner.go:130] ! I1014 15:30:35.373296       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:28.702210   15224 command_runner.go:130] ! I1014 15:30:35.920577       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:28.702210   15224 command_runner.go:130] ! I1014 15:30:39.132287       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-671000-m03"
	I1014 08:47:28.702210   15224 command_runner.go:130] ! I1014 15:30:39.151825       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:28.702210   15224 command_runner.go:130] ! I1014 15:30:45.490553       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:28.702210   15224 command_runner.go:130] ! I1014 15:31:04.306000       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:28.702210   15224 command_runner.go:130] ! I1014 15:31:04.306453       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-671000-m03"
	I1014 08:47:28.702210   15224 command_runner.go:130] ! I1014 15:31:04.323636       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:28.702210   15224 command_runner.go:130] ! I1014 15:31:05.841789       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:28.702210   15224 command_runner.go:130] ! I1014 15:31:09.153752       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:28.702210   15224 command_runner.go:130] ! I1014 15:31:56.911043       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:28.702210   15224 command_runner.go:130] ! I1014 15:32:21.316935       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000"
	I1014 08:47:28.702210   15224 command_runner.go:130] ! I1014 15:36:11.719246       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:28.702210   15224 command_runner.go:130] ! I1014 15:37:02.446841       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:28.702210   15224 command_runner.go:130] ! I1014 15:37:26.676097       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000"
	I1014 08:47:28.702210   15224 command_runner.go:130] ! I1014 15:38:59.261991       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-671000-m02"
	I1014 08:47:28.702210   15224 command_runner.go:130] ! I1014 15:38:59.262728       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:28.702210   15224 command_runner.go:130] ! I1014 15:38:59.286871       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:28.702210   15224 command_runner.go:130] ! I1014 15:39:04.424423       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:28.702210   15224 command_runner.go:130] ! I1014 15:41:24.025444       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:28.702210   15224 command_runner.go:130] ! I1014 15:41:24.063975       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:28.702210   15224 command_runner.go:130] ! I1014 15:41:29.184402       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:28.702210   15224 command_runner.go:130] ! I1014 15:41:29.185577       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-671000-m02"
	I1014 08:47:28.702210   15224 command_runner.go:130] ! I1014 15:41:34.952323       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-671000-m03\" does not exist"
	I1014 08:47:28.702210   15224 command_runner.go:130] ! I1014 15:41:34.952330       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-671000-m02"
	I1014 08:47:28.702210   15224 command_runner.go:130] ! I1014 15:41:34.966125       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-671000-m03" podCIDRs=["10.244.3.0/24"]
	I1014 08:47:28.702210   15224 command_runner.go:130] ! I1014 15:41:34.966148       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:28.702210   15224 command_runner.go:130] ! I1014 15:41:34.966505       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:28.702210   15224 command_runner.go:130] ! I1014 15:41:34.987165       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:28.703189   15224 command_runner.go:130] ! I1014 15:41:35.003234       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:28.703189   15224 command_runner.go:130] ! I1014 15:41:35.540526       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:28.703189   15224 command_runner.go:130] ! I1014 15:41:39.448073       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:28.703189   15224 command_runner.go:130] ! I1014 15:41:45.343875       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:28.703189   15224 command_runner.go:130] ! I1014 15:41:53.719761       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:28.703189   15224 command_runner.go:130] ! I1014 15:41:53.720945       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-671000-m02"
	I1014 08:47:28.703189   15224 command_runner.go:130] ! I1014 15:41:53.741507       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:28.703189   15224 command_runner.go:130] ! I1014 15:41:54.369330       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:28.703189   15224 command_runner.go:130] ! I1014 15:42:08.557249       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:28.703189   15224 command_runner.go:130] ! I1014 15:42:32.770970       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000"
	I1014 08:47:28.703189   15224 command_runner.go:130] ! I1014 15:43:29.631595       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-671000-m02"
	I1014 08:47:28.703189   15224 command_runner.go:130] ! I1014 15:43:29.632207       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:28.703189   15224 command_runner.go:130] ! I1014 15:43:29.853526       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:28.703189   15224 command_runner.go:130] ! I1014 15:43:35.163131       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:28.703189   15224 command_runner.go:130] ! I1014 15:43:45.119758       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:28.703189   15224 command_runner.go:130] ! I1014 15:43:45.151031       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:28.703189   15224 command_runner.go:130] ! I1014 15:43:45.251625       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="18.269341ms"
	I1014 08:47:28.703189   15224 command_runner.go:130] ! I1014 15:43:45.252472       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="121.1µs"
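	The repeated "Waiting for caches to sync" / "Caches are synced" pairs above are the standard client-go shared-informer handshake: each controller blocks until its informers' initial List has populated the local cache, and only then starts its workers. A minimal sketch of that pattern, assuming a stock client-go setup (the names below are illustrative and are not minikube's or kube-controller-manager's own code):

	// informer_sync_sketch.go -- hedged illustration of the cache-sync pattern
	// visible in the controller-manager log lines above.
	package main

	import (
		"time"

		"k8s.io/client-go/informers"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/cache"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumes a reachable kubeconfig at the default location.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		clientset := kubernetes.NewForConfigOrDie(cfg)

		factory := informers.NewSharedInformerFactory(clientset, 10*time.Minute)
		podInformer := factory.Core().V1().Pods().Informer()

		stop := make(chan struct{})
		defer close(stop)
		factory.Start(stop) // begins the List/Watch for every requested informer

		// Equivalent of shared_informer.go's "Waiting for caches to sync":
		// block until the initial List has filled the local store.
		if !cache.WaitForCacheSync(stop, podInformer.HasSynced) {
			panic("caches did not sync")
		}
		// "Caches are synced" -- a controller would start its workers here.
	}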
	I1014 08:47:28.724182   15224 logs.go:123] Gathering logs for Docker ...
	I1014 08:47:28.724182   15224 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 08:47:28.757768   15224 command_runner.go:130] > Oct 14 15:44:44 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1014 08:47:28.757818   15224 command_runner.go:130] > Oct 14 15:44:44 minikube cri-dockerd[221]: time="2024-10-14T15:44:44Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I1014 08:47:28.757877   15224 command_runner.go:130] > Oct 14 15:44:44 minikube cri-dockerd[221]: time="2024-10-14T15:44:44Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1014 08:47:28.757877   15224 command_runner.go:130] > Oct 14 15:44:44 minikube cri-dockerd[221]: time="2024-10-14T15:44:44Z" level=info msg="Start docker client with request timeout 0s"
	I1014 08:47:28.757877   15224 command_runner.go:130] > Oct 14 15:44:44 minikube cri-dockerd[221]: time="2024-10-14T15:44:44Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1014 08:47:28.758090   15224 command_runner.go:130] > Oct 14 15:44:45 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I1014 08:47:28.758893   15224 command_runner.go:130] > Oct 14 15:44:45 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I1014 08:47:28.758893   15224 command_runner.go:130] > Oct 14 15:44:45 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I1014 08:47:28.759063   15224 command_runner.go:130] > Oct 14 15:44:47 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I1014 08:47:28.759063   15224 command_runner.go:130] > Oct 14 15:44:47 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I1014 08:47:28.759063   15224 command_runner.go:130] > Oct 14 15:44:47 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1014 08:47:28.759063   15224 command_runner.go:130] > Oct 14 15:44:47 minikube cri-dockerd[413]: time="2024-10-14T15:44:47Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I1014 08:47:28.759063   15224 command_runner.go:130] > Oct 14 15:44:47 minikube cri-dockerd[413]: time="2024-10-14T15:44:47Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1014 08:47:28.759063   15224 command_runner.go:130] > Oct 14 15:44:47 minikube cri-dockerd[413]: time="2024-10-14T15:44:47Z" level=info msg="Start docker client with request timeout 0s"
	I1014 08:47:28.759063   15224 command_runner.go:130] > Oct 14 15:44:47 minikube cri-dockerd[413]: time="2024-10-14T15:44:47Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1014 08:47:28.759063   15224 command_runner.go:130] > Oct 14 15:44:47 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I1014 08:47:28.759063   15224 command_runner.go:130] > Oct 14 15:44:47 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I1014 08:47:28.759063   15224 command_runner.go:130] > Oct 14 15:44:47 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I1014 08:47:28.759063   15224 command_runner.go:130] > Oct 14 15:44:49 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I1014 08:47:28.759063   15224 command_runner.go:130] > Oct 14 15:44:49 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I1014 08:47:28.759063   15224 command_runner.go:130] > Oct 14 15:44:49 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1014 08:47:28.759063   15224 command_runner.go:130] > Oct 14 15:44:50 minikube cri-dockerd[421]: time="2024-10-14T15:44:50Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I1014 08:47:28.759063   15224 command_runner.go:130] > Oct 14 15:44:50 minikube cri-dockerd[421]: time="2024-10-14T15:44:50Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1014 08:47:28.759063   15224 command_runner.go:130] > Oct 14 15:44:50 minikube cri-dockerd[421]: time="2024-10-14T15:44:50Z" level=info msg="Start docker client with request timeout 0s"
	I1014 08:47:28.759617   15224 command_runner.go:130] > Oct 14 15:44:50 minikube cri-dockerd[421]: time="2024-10-14T15:44:50Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1014 08:47:28.759617   15224 command_runner.go:130] > Oct 14 15:44:50 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I1014 08:47:28.759667   15224 command_runner.go:130] > Oct 14 15:44:50 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I1014 08:47:28.759667   15224 command_runner.go:130] > Oct 14 15:44:50 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I1014 08:47:28.759667   15224 command_runner.go:130] > Oct 14 15:44:52 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I1014 08:47:28.759667   15224 command_runner.go:130] > Oct 14 15:44:52 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I1014 08:47:28.759667   15224 command_runner.go:130] > Oct 14 15:44:52 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I1014 08:47:28.759667   15224 command_runner.go:130] > Oct 14 15:44:52 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I1014 08:47:28.759667   15224 command_runner.go:130] > Oct 14 15:44:52 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
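	The loop above is cri-dockerd probing dockerd for its version before dockerd is up: each attempt dies with "Cannot connect to the Docker daemon", systemd bumps the restart counter, and after the third failure the unit trips its start-rate limit ("Start request repeated too quickly"). A minimal sketch of the same version probe using the Docker Go SDK, assuming the default socket path; this is illustrative, not cri-dockerd's actual code:

	// dockerd_probe_sketch.go -- hedged illustration of the startup probe that
	// fails in the cri-docker journal entries above.
	package main

	import (
		"context"
		"fmt"
		"time"

		"github.com/docker/docker/client"
	)

	func main() {
		cli, err := client.NewClientWithOpts(
			client.WithHost("unix:///var/run/docker.sock"),
			client.WithAPIVersionNegotiation(),
		)
		if err != nil {
			panic(err)
		}
		defer cli.Close()

		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		v, err := cli.ServerVersion(ctx)
		if err != nil {
			// Corresponds to the journal's "failed to get docker version
			// from dockerd" fatal message while dockerd is not yet running.
			fmt.Println("docker daemon not reachable:", err)
			return
		}
		fmt.Println("dockerd version:", v.Version)
	}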
	I1014 08:47:28.759916   15224 command_runner.go:130] > Oct 14 15:45:33 multinode-671000 systemd[1]: Starting Docker Application Container Engine...
	I1014 08:47:28.759916   15224 command_runner.go:130] > Oct 14 15:45:33 multinode-671000 dockerd[649]: time="2024-10-14T15:45:33.956984837Z" level=info msg="Starting up"
	I1014 08:47:28.760005   15224 command_runner.go:130] > Oct 14 15:45:33 multinode-671000 dockerd[649]: time="2024-10-14T15:45:33.957924243Z" level=info msg="containerd not running, starting managed containerd"
	I1014 08:47:28.760108   15224 command_runner.go:130] > Oct 14 15:45:33 multinode-671000 dockerd[649]: time="2024-10-14T15:45:33.959335951Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=655
	I1014 08:47:28.760151   15224 command_runner.go:130] > Oct 14 15:45:33 multinode-671000 dockerd[655]: time="2024-10-14T15:45:33.994773864Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	I1014 08:47:28.760222   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.020772213Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I1014 08:47:28.760455   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.021015015Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I1014 08:47:28.760935   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.021095615Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I1014 08:47:28.761054   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.021147816Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I1014 08:47:28.761054   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.021828519Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I1014 08:47:28.761054   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.021976120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I1014 08:47:28.761119   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.022248222Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I1014 08:47:28.761119   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.022376622Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I1014 08:47:28.761181   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.022401523Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I1014 08:47:28.761207   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.022414623Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I1014 08:47:28.761237   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.023030126Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I1014 08:47:28.761237   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.023715230Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I1014 08:47:28.761275   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.027058949Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I1014 08:47:28.761316   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.027212250Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I1014 08:47:28.761356   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.027346050Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I1014 08:47:28.761356   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.027434351Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I1014 08:47:28.761417   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.028070055Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I1014 08:47:28.761459   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.028254556Z" level=info msg="metadata content store policy set" policy=shared
	I1014 08:47:28.761510   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.033722086Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I1014 08:47:28.761510   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.033900187Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I1014 08:47:28.761510   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.033927888Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I1014 08:47:28.761510   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.033944088Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I1014 08:47:28.761587   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.033959488Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I1014 08:47:28.761615   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.034029088Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I1014 08:47:28.761615   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.034638992Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I1014 08:47:28.761615   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.034898493Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I1014 08:47:28.761615   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.034993394Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I1014 08:47:28.761685   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035025394Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I1014 08:47:28.761714   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035042394Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I1014 08:47:28.761714   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035056394Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I1014 08:47:28.761714   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035070894Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I1014 08:47:28.761714   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035091294Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I1014 08:47:28.761814   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035125794Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I1014 08:47:28.761814   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035139394Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I1014 08:47:28.761814   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035152195Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I1014 08:47:28.761874   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035200795Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I1014 08:47:28.761900   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035227495Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I1014 08:47:28.761930   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035242395Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I1014 08:47:28.761930   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035255095Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I1014 08:47:28.761930   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035268595Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I1014 08:47:28.761930   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035283595Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I1014 08:47:28.761930   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035296895Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I1014 08:47:28.761930   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035314495Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I1014 08:47:28.761930   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035330096Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I1014 08:47:28.761930   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035343596Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I1014 08:47:28.761930   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035364096Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I1014 08:47:28.761930   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035376796Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I1014 08:47:28.761930   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035388896Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I1014 08:47:28.761930   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035401196Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I1014 08:47:28.761930   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035419096Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I1014 08:47:28.761930   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035441896Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I1014 08:47:28.761930   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035454496Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I1014 08:47:28.761930   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035465896Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I1014 08:47:28.761930   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035512897Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I1014 08:47:28.761930   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035554297Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I1014 08:47:28.761930   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035568497Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I1014 08:47:28.761930   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035580597Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I1014 08:47:28.762662   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035590797Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I1014 08:47:28.762736   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035604297Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I1014 08:47:28.762736   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035619397Z" level=info msg="NRI interface is disabled by configuration."
	I1014 08:47:28.762804   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035934999Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I1014 08:47:28.762873   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.036229901Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I1014 08:47:28.762873   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.036295501Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I1014 08:47:28.762873   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.036322201Z" level=info msg="containerd successfully booted in 0.043787s"
	I1014 08:47:28.762949   15224 command_runner.go:130] > Oct 14 15:45:35 multinode-671000 dockerd[649]: time="2024-10-14T15:45:35.016752326Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1014 08:47:28.762949   15224 command_runner.go:130] > Oct 14 15:45:35 multinode-671000 dockerd[649]: time="2024-10-14T15:45:35.204043816Z" level=info msg="Loading containers: start."
	I1014 08:47:28.762949   15224 command_runner.go:130] > Oct 14 15:45:35 multinode-671000 dockerd[649]: time="2024-10-14T15:45:35.545951324Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1014 08:47:28.763041   15224 command_runner.go:130] > Oct 14 15:45:35 multinode-671000 dockerd[649]: time="2024-10-14T15:45:35.688138626Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I1014 08:47:28.763041   15224 command_runner.go:130] > Oct 14 15:45:35 multinode-671000 dockerd[649]: time="2024-10-14T15:45:35.780023455Z" level=info msg="Loading containers: done."
	I1014 08:47:28.763109   15224 command_runner.go:130] > Oct 14 15:45:35 multinode-671000 dockerd[649]: time="2024-10-14T15:45:35.809569125Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	I1014 08:47:28.763109   15224 command_runner.go:130] > Oct 14 15:45:35 multinode-671000 dockerd[649]: time="2024-10-14T15:45:35.809610125Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	I1014 08:47:28.763168   15224 command_runner.go:130] > Oct 14 15:45:35 multinode-671000 dockerd[649]: time="2024-10-14T15:45:35.809633825Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
	I1014 08:47:28.763188   15224 command_runner.go:130] > Oct 14 15:45:35 multinode-671000 dockerd[649]: time="2024-10-14T15:45:35.810490930Z" level=info msg="Daemon has completed initialization"
	I1014 08:47:28.763188   15224 command_runner.go:130] > Oct 14 15:45:35 multinode-671000 dockerd[649]: time="2024-10-14T15:45:35.853736479Z" level=info msg="API listen on /var/run/docker.sock"
	I1014 08:47:28.763188   15224 command_runner.go:130] > Oct 14 15:45:35 multinode-671000 dockerd[649]: time="2024-10-14T15:45:35.854139881Z" level=info msg="API listen on [::]:2376"
	I1014 08:47:28.763188   15224 command_runner.go:130] > Oct 14 15:45:35 multinode-671000 systemd[1]: Started Docker Application Container Engine.
	I1014 08:47:28.763188   15224 command_runner.go:130] > Oct 14 15:46:01 multinode-671000 systemd[1]: Stopping Docker Application Container Engine...
	I1014 08:47:28.763188   15224 command_runner.go:130] > Oct 14 15:46:01 multinode-671000 dockerd[649]: time="2024-10-14T15:46:01.049459779Z" level=info msg="Processing signal 'terminated'"
	I1014 08:47:28.763188   15224 command_runner.go:130] > Oct 14 15:46:01 multinode-671000 dockerd[649]: time="2024-10-14T15:46:01.053392981Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I1014 08:47:28.763188   15224 command_runner.go:130] > Oct 14 15:46:01 multinode-671000 dockerd[649]: time="2024-10-14T15:46:01.053568081Z" level=info msg="Daemon shutdown complete"
	I1014 08:47:28.763188   15224 command_runner.go:130] > Oct 14 15:46:01 multinode-671000 dockerd[649]: time="2024-10-14T15:46:01.053889681Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I1014 08:47:28.763188   15224 command_runner.go:130] > Oct 14 15:46:01 multinode-671000 dockerd[649]: time="2024-10-14T15:46:01.054172781Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I1014 08:47:28.763188   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 systemd[1]: docker.service: Deactivated successfully.
	I1014 08:47:28.763188   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 systemd[1]: Stopped Docker Application Container Engine.
	I1014 08:47:28.763188   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 systemd[1]: Starting Docker Application Container Engine...
	I1014 08:47:28.763188   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1087]: time="2024-10-14T15:46:02.109177376Z" level=info msg="Starting up"
	I1014 08:47:28.763188   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1087]: time="2024-10-14T15:46:02.110667577Z" level=info msg="containerd not running, starting managed containerd"
	I1014 08:47:28.763188   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1087]: time="2024-10-14T15:46:02.112008177Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1093
	I1014 08:47:28.763188   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.143199292Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	I1014 08:47:28.763188   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.168149004Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I1014 08:47:28.763188   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.168197704Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I1014 08:47:28.763188   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.168231304Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I1014 08:47:28.763188   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.168244704Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I1014 08:47:28.763188   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.168266504Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I1014 08:47:28.763758   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.168317904Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I1014 08:47:28.763812   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.168445004Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I1014 08:47:28.763914   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.168531404Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I1014 08:47:28.763914   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.168550204Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I1014 08:47:28.763914   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.168561104Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I1014 08:47:28.764036   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.168583904Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I1014 08:47:28.764036   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.168690904Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I1014 08:47:28.764102   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.171907506Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I1014 08:47:28.764102   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.172002906Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I1014 08:47:28.764170   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.172175606Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I1014 08:47:28.764170   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.172377606Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I1014 08:47:28.764170   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.172424606Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I1014 08:47:28.764246   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.172461506Z" level=info msg="metadata content store policy set" policy=shared
	I1014 08:47:28.764298   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.172795106Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I1014 08:47:28.764298   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.172882406Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I1014 08:47:28.764298   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.172902406Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I1014 08:47:28.764298   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.172916306Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I1014 08:47:28.764298   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.172930506Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I1014 08:47:28.764298   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.172992206Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I1014 08:47:28.764298   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173380806Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I1014 08:47:28.764298   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173626906Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I1014 08:47:28.764298   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173734806Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I1014 08:47:28.764298   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173758306Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I1014 08:47:28.764298   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173794906Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I1014 08:47:28.764298   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173849506Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I1014 08:47:28.764298   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173864606Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I1014 08:47:28.764298   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173878206Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I1014 08:47:28.764298   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173900507Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I1014 08:47:28.764298   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173916207Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I1014 08:47:28.764298   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173928607Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I1014 08:47:28.764298   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173940507Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I1014 08:47:28.764298   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173959407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I1014 08:47:28.764298   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173973007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I1014 08:47:28.764298   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173985207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I1014 08:47:28.764810   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173998307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I1014 08:47:28.764810   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174010307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I1014 08:47:28.764810   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174023407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I1014 08:47:28.764810   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174035407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I1014 08:47:28.764810   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174047207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I1014 08:47:28.764810   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174077107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I1014 08:47:28.764810   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174095807Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I1014 08:47:28.764810   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174107607Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I1014 08:47:28.764810   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174191507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I1014 08:47:28.764810   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174206607Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I1014 08:47:28.764810   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174229207Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I1014 08:47:28.764810   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174259307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I1014 08:47:28.764810   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174352207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I1014 08:47:28.764810   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174370407Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I1014 08:47:28.764810   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174499607Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I1014 08:47:28.764810   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174541907Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I1014 08:47:28.764810   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174556007Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I1014 08:47:28.764810   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174568207Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I1014 08:47:28.765795   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174578207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I1014 08:47:28.765795   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174598407Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I1014 08:47:28.765795   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174612107Z" level=info msg="NRI interface is disabled by configuration."
	I1014 08:47:28.765795   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174893107Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I1014 08:47:28.765795   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.175192307Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I1014 08:47:28.765795   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.175271607Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I1014 08:47:28.765795   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.175364007Z" level=info msg="containerd successfully booted in 0.032943s"
	I1014 08:47:28.765795   15224 command_runner.go:130] > Oct 14 15:46:03 multinode-671000 dockerd[1087]: time="2024-10-14T15:46:03.157176768Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1014 08:47:28.765795   15224 command_runner.go:130] > Oct 14 15:46:03 multinode-671000 dockerd[1087]: time="2024-10-14T15:46:03.188626383Z" level=info msg="Loading containers: start."
	I1014 08:47:28.765795   15224 command_runner.go:130] > Oct 14 15:46:03 multinode-671000 dockerd[1087]: time="2024-10-14T15:46:03.419822091Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1014 08:47:28.765795   15224 command_runner.go:130] > Oct 14 15:46:03 multinode-671000 dockerd[1087]: time="2024-10-14T15:46:03.533275144Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I1014 08:47:28.765795   15224 command_runner.go:130] > Oct 14 15:46:03 multinode-671000 dockerd[1087]: time="2024-10-14T15:46:03.631380390Z" level=info msg="Loading containers: done."
	I1014 08:47:28.765795   15224 command_runner.go:130] > Oct 14 15:46:03 multinode-671000 dockerd[1087]: time="2024-10-14T15:46:03.656005002Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
	I1014 08:47:28.766807   15224 command_runner.go:130] > Oct 14 15:46:03 multinode-671000 dockerd[1087]: time="2024-10-14T15:46:03.656245502Z" level=info msg="Daemon has completed initialization"
	I1014 08:47:28.766807   15224 command_runner.go:130] > Oct 14 15:46:03 multinode-671000 dockerd[1087]: time="2024-10-14T15:46:03.695426820Z" level=info msg="API listen on /var/run/docker.sock"
	I1014 08:47:28.766807   15224 command_runner.go:130] > Oct 14 15:46:03 multinode-671000 systemd[1]: Started Docker Application Container Engine.
	I1014 08:47:28.766807   15224 command_runner.go:130] > Oct 14 15:46:03 multinode-671000 dockerd[1087]: time="2024-10-14T15:46:03.695638120Z" level=info msg="API listen on [::]:2376"
	I1014 08:47:28.766807   15224 command_runner.go:130] > Oct 14 15:46:04 multinode-671000 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1014 08:47:28.766807   15224 command_runner.go:130] > Oct 14 15:46:04 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:04Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I1014 08:47:28.766807   15224 command_runner.go:130] > Oct 14 15:46:04 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:04Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1014 08:47:28.766807   15224 command_runner.go:130] > Oct 14 15:46:04 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:04Z" level=info msg="Start docker client with request timeout 0s"
	I1014 08:47:28.766807   15224 command_runner.go:130] > Oct 14 15:46:04 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:04Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I1014 08:47:28.766807   15224 command_runner.go:130] > Oct 14 15:46:04 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:04Z" level=info msg="Loaded network plugin cni"
	I1014 08:47:28.766807   15224 command_runner.go:130] > Oct 14 15:46:04 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:04Z" level=info msg="Docker cri networking managed by network plugin cni"
	I1014 08:47:28.766807   15224 command_runner.go:130] > Oct 14 15:46:04 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:04Z" level=info msg="Setting cgroupDriver cgroupfs"
	I1014 08:47:28.766807   15224 command_runner.go:130] > Oct 14 15:46:04 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:04Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I1014 08:47:28.766807   15224 command_runner.go:130] > Oct 14 15:46:04 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:04Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I1014 08:47:28.766807   15224 command_runner.go:130] > Oct 14 15:46:04 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:04Z" level=info msg="Start cri-dockerd grpc backend"
	I1014 08:47:28.766807   15224 command_runner.go:130] > Oct 14 15:46:04 multinode-671000 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I1014 08:47:28.766807   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:09Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7c65d6cfc9-fs9ct_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"2f8cc9a218fef4446839b0253827fc2dd89f8eccbd66ab7f0e5123c033f0aacc\""
	I1014 08:47:28.766807   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:09Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-7dff88458-vlp7j_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"06e529266db4b2cf3ad09285cddeb89d7d11e858b5156cf73f41198c9500b9f2\""
	I1014 08:47:28.766807   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.635579177Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:28.766807   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.635817077Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:28.766807   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.635919877Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:28.766807   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.636083677Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:28.766807   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.762883836Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:28.766807   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.763092036Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:28.766807   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.763114536Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:28.766807   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.765440937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:28.766807   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:10Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1d3033f871fb11cb3095bcf5c5d43615de9685372a45edf226fe52b2f482bc71/resolv.conf as [nameserver 172.20.96.1]"
	I1014 08:47:28.766807   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.846488476Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:28.766807   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.847106376Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:28.766807   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.847254676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:28.766807   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.854373579Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:28.766807   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.883112593Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:28.766807   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.884477393Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:28.766807   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.884514293Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:28.766807   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.884605993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:28.766807   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:10Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6155e8be2d5d725a4259a45fe10f7ceb3fc581d528a6486633b563a59f331127/resolv.conf as [nameserver 172.20.96.1]"
	I1014 08:47:28.767796   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:11Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7bd4c36606eefc91e9ae07ea5683536fc78fdb6f7f752f44d28787b88540a878/resolv.conf as [nameserver 172.20.96.1]"
	I1014 08:47:28.767796   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.061102976Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:28.767796   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.061201476Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:28.767796   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.061221876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:28.767796   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.061393176Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:28.767796   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:11Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0697a11790e806fa4d679e3d97be4c7193692d1cb8f76d882cf3a75aa8e0c238/resolv.conf as [nameserver 172.20.96.1]"
	I1014 08:47:28.767796   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.312465294Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:28.767796   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.312610794Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:28.767796   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.312646494Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:28.767796   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.312762794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:28.767796   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.422697746Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:28.767796   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.422797746Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:28.767796   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.422816346Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:28.767796   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.423001046Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:28.767796   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.500801282Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:28.767796   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.501016583Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:28.767796   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.501037383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:28.767796   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.504117984Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:28.767796   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:15Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I1014 08:47:28.767796   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.472267615Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:28.767796   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.472571215Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:28.767796   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.472597215Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:28.767796   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.472873315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:28.767796   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.475833517Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:28.767796   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.476013017Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:28.767796   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.476107917Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:28.767796   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.476358717Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:28.767796   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.515050835Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:28.767796   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.515249635Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:28.767796   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.515393835Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:28.767796   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.515565835Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:28.767796   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:16Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6f8bdf552734e59df94d489a081585cdaeeaee729f0a0ae92105117d4d744f3f/resolv.conf as [nameserver 172.20.96.1]"
	I1014 08:47:28.767796   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:16Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/cdcdd532ba1369fe0e866b321f18e0bd99a88a2699c325eea17a070d48f2f024/resolv.conf as [nameserver 172.20.96.1]"
	I1014 08:47:28.767796   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.911588321Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:28.768794   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.913177522Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:28.768794   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.913368522Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:28.768794   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.914060722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:28.768794   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:16Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7bcadf1f0885ff3411b477a64c6af366c77eb264ce7a761f174b079b11e2e703/resolv.conf as [nameserver 172.20.96.1]"
	I1014 08:47:28.768794   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:17.063841193Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:28.768794   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:17.063929693Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:28.768794   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:17.063946093Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:28.768794   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:17.064242693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:28.768794   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:17.206735160Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:28.768794   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:17.207544260Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:28.768794   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:17.208633061Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:28.768794   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:17.224429668Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:28.768794   15224 command_runner.go:130] > Oct 14 15:46:47 multinode-671000 dockerd[1087]: time="2024-10-14T15:46:47.556424473Z" level=info msg="ignoring event" container=c76c2585681071ddc096fbc042a644b58d0b7d5d604107c6906a076c7aa1cca6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1014 08:47:28.768794   15224 command_runner.go:130] > Oct 14 15:46:47 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:47.559859508Z" level=info msg="shim disconnected" id=c76c2585681071ddc096fbc042a644b58d0b7d5d604107c6906a076c7aa1cca6 namespace=moby
	I1014 08:47:28.768794   15224 command_runner.go:130] > Oct 14 15:46:47 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:47.560270512Z" level=warning msg="cleaning up after shim disconnected" id=c76c2585681071ddc096fbc042a644b58d0b7d5d604107c6906a076c7aa1cca6 namespace=moby
	I1014 08:47:28.768794   15224 command_runner.go:130] > Oct 14 15:46:47 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:47.560505714Z" level=info msg="cleaning up dead shim" namespace=moby
	I1014 08:47:28.768794   15224 command_runner.go:130] > Oct 14 15:47:03 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:03.070959923Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:28.768794   15224 command_runner.go:130] > Oct 14 15:47:03 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:03.071176624Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:28.768794   15224 command_runner.go:130] > Oct 14 15:47:03 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:03.071240924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:28.768794   15224 command_runner.go:130] > Oct 14 15:47:03 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:03.071756926Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:28.768794   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.071716036Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:28.768794   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.071943436Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:28.768794   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.071968036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:28.768794   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.072116937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:28.768794   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:47:20Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/429b989a1a986d23a2e5aee0de1aef1e683a014bebb587981622bd80a3ac5221/resolv.conf as [nameserver 172.20.96.1]"
	I1014 08:47:28.768794   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.295865797Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:28.768794   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.295993998Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:28.768794   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.296019698Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:28.768794   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.296117898Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:28.768794   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:47:20Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9092d17516eb35243fd461a360605e738727838ee50f870f3bd6c290fd061d20/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I1014 08:47:28.768794   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.536751498Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:28.768794   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.537062099Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:28.768794   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.537100499Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:28.768794   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.537246499Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:28.768794   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.821494873Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:28.769793   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.821592273Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:28.769793   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.821611273Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:28.769793   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.821730874Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
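Note: the journal excerpt above records a complete runtime restart cycle: dockerd[649] shuts down cleanly, dockerd[1087] comes back up with a managed containerd (pid 1093), and cri-dockerd[1356] reconnects to the docker socket and receives the pod CIDR 10.244.0.0/24 at 15:46:15. A rough way to pull the same journal slice by hand is sketched below; the docker unit name appears in the log itself, while cri-docker as the unit name for cri-dockerd is an assumption about the guest image.

    # manual spot check, not part of the test flow
    minikube ssh -p multinode-671000 -- sudo journalctl -u docker -u cri-docker --no-pager -n 200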
	I1014 08:47:28.796399   15224 logs.go:123] Gathering logs for kube-proxy [e83db276dec3] ...
	I1014 08:47:28.796399   15224 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e83db276dec3"
	I1014 08:47:28.824931   15224 command_runner.go:130] ! I1014 15:46:17.821967       1 server_linux.go:66] "Using iptables proxy"
	I1014 08:47:28.825159   15224 command_runner.go:130] ! E1014 15:46:17.985243       1 proxier.go:734] "Error cleaning up nftables rules" err=<
	I1014 08:47:28.825159   15224 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-24: Error: Could not process rule: Operation not supported
	I1014 08:47:28.825159   15224 command_runner.go:130] ! 	add table ip kube-proxy
	I1014 08:47:28.825215   15224 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^
	I1014 08:47:28.825215   15224 command_runner.go:130] !  >
	I1014 08:47:28.825215   15224 command_runner.go:130] ! E1014 15:46:18.020523       1 proxier.go:734] "Error cleaning up nftables rules" err=<
	I1014 08:47:28.825215   15224 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
	I1014 08:47:28.825351   15224 command_runner.go:130] ! 	add table ip6 kube-proxy
	I1014 08:47:28.825351   15224 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^^
	I1014 08:47:28.825351   15224 command_runner.go:130] !  >
	I1014 08:47:28.825351   15224 command_runner.go:130] ! I1014 15:46:18.173230       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["172.20.106.123"]
	I1014 08:47:28.825451   15224 command_runner.go:130] ! E1014 15:46:18.173392       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1014 08:47:28.825451   15224 command_runner.go:130] ! I1014 15:46:18.286207       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1014 08:47:28.825451   15224 command_runner.go:130] ! I1014 15:46:18.287289       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1014 08:47:28.825531   15224 command_runner.go:130] ! I1014 15:46:18.287905       1 server_linux.go:169] "Using iptables Proxier"
	I1014 08:47:28.825575   15224 command_runner.go:130] ! I1014 15:46:18.293792       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1014 08:47:28.825593   15224 command_runner.go:130] ! I1014 15:46:18.300740       1 server.go:483] "Version info" version="v1.31.1"
	I1014 08:47:28.825593   15224 command_runner.go:130] ! I1014 15:46:18.300778       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 08:47:28.825593   15224 command_runner.go:130] ! I1014 15:46:18.305824       1 config.go:199] "Starting service config controller"
	I1014 08:47:28.825648   15224 command_runner.go:130] ! I1014 15:46:18.308209       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1014 08:47:28.825769   15224 command_runner.go:130] ! I1014 15:46:18.308868       1 config.go:328] "Starting node config controller"
	I1014 08:47:28.825769   15224 command_runner.go:130] ! I1014 15:46:18.314183       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1014 08:47:28.825769   15224 command_runner.go:130] ! I1014 15:46:18.309398       1 config.go:105] "Starting endpoint slice config controller"
	I1014 08:47:28.825769   15224 command_runner.go:130] ! I1014 15:46:18.317842       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1014 08:47:28.825837   15224 command_runner.go:130] ! I1014 15:46:18.419882       1 shared_informer.go:320] Caches are synced for node config
	I1014 08:47:28.825837   15224 command_runner.go:130] ! I1014 15:46:18.419918       1 shared_informer.go:320] Caches are synced for service config
	I1014 08:47:28.825837   15224 command_runner.go:130] ! I1014 15:46:18.435586       1 shared_informer.go:320] Caches are synced for endpoint slice config
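Note: the two "Error cleaning up nftables rules ... Operation not supported" entries above are kube-proxy, running in its default iptables mode, trying to remove tables that an earlier nftables-mode instance might have left behind; on this 5.10.207 guest kernel the nf_tables NAT hooks are unavailable, so the cleanup fails and kube-proxy proceeds anyway, as the later "Using iptables Proxier" and cache-sync lines show. A quick manual probe from inside the VM (assuming the nft tool is present in the guest image) would be expected to fail the same way:

    minikube ssh -p multinode-671000 -- sudo nft list tables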
	I1014 08:47:31.328895   15224 api_server.go:253] Checking apiserver healthz at https://172.20.106.123:8443/healthz ...
	I1014 08:47:31.337693   15224 api_server.go:279] https://172.20.106.123:8443/healthz returned 200:
	ok
	I1014 08:47:31.337875   15224 round_trippers.go:463] GET https://172.20.106.123:8443/version
	I1014 08:47:31.337875   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:31.337875   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:31.337875   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:31.339782   15224 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1014 08:47:31.339892   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:31.339892   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:31.339892   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:31.339892   15224 round_trippers.go:580]     Content-Length: 263
	I1014 08:47:31.339892   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:31 GMT
	I1014 08:47:31.339892   15224 round_trippers.go:580]     Audit-Id: 2fb19e10-d3f3-4081-a9fc-50ab014bc482
	I1014 08:47:31.339892   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:31.339892   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:31.339892   15224 request.go:1351] Response Body: {
	  "major": "1",
	  "minor": "31",
	  "gitVersion": "v1.31.1",
	  "gitCommit": "948afe5ca072329a73c8e79ed5938717a5cb3d21",
	  "gitTreeState": "clean",
	  "buildDate": "2024-09-11T21:22:08Z",
	  "goVersion": "go1.22.6",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
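Note: the health probe is two plain HTTPS GETs against the API server: /healthz, which returned the bare "ok" above, and /version, which returned the JSON body. Both paths are normally readable without credentials under the default Kubernetes RBAC bindings, so the exchange can be replayed from the host with curl (TLS verification skipped here only because this is a throwaway manual check):

    curl -k https://172.20.106.123:8443/healthz
    curl -k https://172.20.106.123:8443/version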
	I1014 08:47:31.340067   15224 api_server.go:141] control plane version: v1.31.1
	I1014 08:47:31.340067   15224 api_server.go:131] duration metric: took 3.711038s to wait for apiserver health ...
	I1014 08:47:31.340067   15224 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 08:47:31.350071   15224 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1014 08:47:31.375786   15224 command_runner.go:130] > a834664fc8b8
	I1014 08:47:31.375786   15224 logs.go:282] 1 containers: [a834664fc8b8]
	I1014 08:47:31.385213   15224 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1014 08:47:31.411121   15224 command_runner.go:130] > 48c8492e231e
	I1014 08:47:31.411121   15224 logs.go:282] 1 containers: [48c8492e231e]
	I1014 08:47:31.420416   15224 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1014 08:47:31.445869   15224 command_runner.go:130] > 5d223e2e64fc
	I1014 08:47:31.445869   15224 command_runner.go:130] > d9831e9f8ce8
	I1014 08:47:31.445955   15224 logs.go:282] 2 containers: [5d223e2e64fc d9831e9f8ce8]
	I1014 08:47:31.455088   15224 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1014 08:47:31.477326   15224 command_runner.go:130] > d428685276e1
	I1014 08:47:31.477410   15224 command_runner.go:130] > 661e75bbf6b4
	I1014 08:47:31.477410   15224 logs.go:282] 2 containers: [d428685276e1 661e75bbf6b4]
	I1014 08:47:31.485447   15224 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1014 08:47:31.509014   15224 command_runner.go:130] > e83db276dec3
	I1014 08:47:31.509014   15224 command_runner.go:130] > ea19428d7036
	I1014 08:47:31.509014   15224 logs.go:282] 2 containers: [e83db276dec3 ea19428d7036]
	I1014 08:47:31.518018   15224 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1014 08:47:31.539818   15224 command_runner.go:130] > 8af48c446f7e
	I1014 08:47:31.539818   15224 command_runner.go:130] > 712aad669c9f
	I1014 08:47:31.539818   15224 logs.go:282] 2 containers: [8af48c446f7e 712aad669c9f]
	I1014 08:47:31.548793   15224 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1014 08:47:31.570120   15224 command_runner.go:130] > bba035362eb9
	I1014 08:47:31.570120   15224 command_runner.go:130] > fcdf89a3ac8c
	I1014 08:47:31.570120   15224 logs.go:282] 2 containers: [bba035362eb9 fcdf89a3ac8c]
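Note: each component above is located by filtering on the container-name prefix that cri-dockerd assigns (k8s_<container>_<pod>_<namespace>_..., following the old dockershim convention). Components listing two IDs most likely have both a current container and an exited pre-restart one in docker ps -a, consistent with the runtime restart captured earlier in the log. An illustrative one-liner to see all of them with their full names, run inside the node:

    docker ps -a --filter=name=k8s_ --format '{{.ID}}\t{{.Names}}'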
	I1014 08:47:31.570120   15224 logs.go:123] Gathering logs for kubelet ...
	I1014 08:47:31.570120   15224 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 08:47:31.600080   15224 command_runner.go:130] > Oct 14 15:46:05 multinode-671000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I1014 08:47:31.600214   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 kubelet[1480]: I1014 15:46:06.037054    1480 server.go:486] "Kubelet version" kubeletVersion="v1.31.1"
	I1014 08:47:31.600214   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 kubelet[1480]: I1014 15:46:06.037147    1480 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 08:47:31.600214   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 kubelet[1480]: I1014 15:46:06.038385    1480 server.go:929] "Client rotation is on, will bootstrap in background"
	I1014 08:47:31.600282   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 kubelet[1480]: E1014 15:46:06.039788    1480 run.go:72] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I1014 08:47:31.600308   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I1014 08:47:31.600308   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I1014 08:47:31.600308   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I1014 08:47:31.600400   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I1014 08:47:31.600400   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I1014 08:47:31.600461   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 kubelet[1540]: I1014 15:46:06.835721    1540 server.go:486] "Kubelet version" kubeletVersion="v1.31.1"
	I1014 08:47:31.600485   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 kubelet[1540]: I1014 15:46:06.835931    1540 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 08:47:31.600485   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 kubelet[1540]: I1014 15:46:06.836250    1540 server.go:929] "Client rotation is on, will bootstrap in background"
	I1014 08:47:31.600485   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 kubelet[1540]: E1014 15:46:06.836463    1540 run.go:72] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I1014 08:47:31.600485   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I1014 08:47:31.600591   15224 command_runner.go:130] > Oct 14 15:46:06 multinode-671000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I1014 08:47:31.600591   15224 command_runner.go:130] > Oct 14 15:46:07 multinode-671000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I1014 08:47:31.600591   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I1014 08:47:31.600591   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.687712    1622 server.go:486] "Kubelet version" kubeletVersion="v1.31.1"
	I1014 08:47:31.600686   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.688474    1622 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 08:47:31.600686   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.689105    1622 server.go:929] "Client rotation is on, will bootstrap in background"
	I1014 08:47:31.600748   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.691939    1622 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I1014 08:47:31.600773   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.718455    1622 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1014 08:47:31.600803   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: E1014 15:46:09.739709    1622 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
	I1014 08:47:31.600803   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.739760    1622 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
	I1014 08:47:31.600803   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.744155    1622 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I1014 08:47:31.600803   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.744395    1622 server.go:812] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I1014 08:47:31.600803   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.744486    1622 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
	I1014 08:47:31.600803   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.744668    1622 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I1014 08:47:31.600803   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.744761    1622 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"multinode-671000","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
	I1014 08:47:31.600803   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.746378    1622 topology_manager.go:138] "Creating topology manager with none policy"
	I1014 08:47:31.600803   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.746460    1622 container_manager_linux.go:300] "Creating device plugin manager"
	I1014 08:47:31.600803   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.746633    1622 state_mem.go:36] "Initialized new in-memory state store"
	I1014 08:47:31.600803   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.749964    1622 kubelet.go:408] "Attempting to sync node with API server"
	I1014 08:47:31.600803   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.750004    1622 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
	I1014 08:47:31.600803   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.750036    1622 kubelet.go:314] "Adding apiserver pod source"
	I1014 08:47:31.600803   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.750844    1622 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I1014 08:47:31.600803   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.756693    1622 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="docker" version="27.3.1" apiVersion="v1"
	I1014 08:47:31.600803   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.763816    1622 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
	I1014 08:47:31.600803   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: W1014 15:46:09.764725    1622 probe.go:272] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I1014 08:47:31.600803   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.766925    1622 server.go:1269] "Started kubelet"
	I1014 08:47:31.600803   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: W1014 15:46:09.767088    1622 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-671000&limit=500&resourceVersion=0": dial tcp 172.20.106.123:8443: connect: connection refused
	I1014 08:47:31.600803   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: E1014 15:46:09.767172    1622 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-671000&limit=500&resourceVersion=0\": dial tcp 172.20.106.123:8443: connect: connection refused" logger="UnhandledError"
	I1014 08:47:31.601403   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: W1014 15:46:09.769189    1622 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.20.106.123:8443: connect: connection refused
	I1014 08:47:31.601403   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: E1014 15:46:09.769350    1622 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.20.106.123:8443: connect: connection refused" logger="UnhandledError"
	I1014 08:47:31.601403   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.769454    1622 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I1014 08:47:31.601403   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.770134    1622 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I1014 08:47:31.601604   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: E1014 15:46:09.772237    1622 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.20.106.123:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-671000.17fe5c47a6bff791  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-671000,UID:multinode-671000,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-671000,},FirstTimestamp:2024-10-14 15:46:09.766881169 +0000 UTC m=+0.158153075,LastTimestamp:2024-10-14 15:46:09.766881169 +0000 UTC m=+0.158153075,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-671000,}"
	I1014 08:47:31.601604   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.773096    1622 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
	I1014 08:47:31.601604   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.774576    1622 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I1014 08:47:31.601703   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.777686    1622 server.go:460] "Adding debug handlers to kubelet server"
	I1014 08:47:31.601727   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.780950    1622 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
	I1014 08:47:31.601755   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.788697    1622 volume_manager.go:289] "Starting Kubelet Volume Manager"
	I1014 08:47:31.601755   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: E1014 15:46:09.789003    1622 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"multinode-671000\" not found"
	I1014 08:47:31.601755   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.789640    1622 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
	I1014 08:47:31.601755   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.800447    1622 factory.go:221] Registration of the systemd container factory successfully
	I1014 08:47:31.601755   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.800536    1622 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I1014 08:47:31.601755   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.800587    1622 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
	I1014 08:47:31.601755   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: W1014 15:46:09.811192    1622 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.20.106.123:8443: connect: connection refused
	I1014 08:47:31.601755   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: E1014 15:46:09.811498    1622 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.20.106.123:8443: connect: connection refused" logger="UnhandledError"
	I1014 08:47:31.601755   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: E1014 15:46:09.812017    1622 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-671000?timeout=10s\": dial tcp 172.20.106.123:8443: connect: connection refused" interval="200ms"
	I1014 08:47:31.601755   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.863497    1622 cpu_manager.go:214] "Starting CPU manager" policy="none"
	I1014 08:47:31.601755   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.863530    1622 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
	I1014 08:47:31.601755   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.863554    1622 state_mem.go:36] "Initialized new in-memory state store"
	I1014 08:47:31.601755   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.868881    1622 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I1014 08:47:31.601755   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.868953    1622 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I1014 08:47:31.601755   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.868995    1622 policy_none.go:49] "None policy: Start"
	I1014 08:47:31.601755   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.872200    1622 reconciler.go:26] "Reconciler: start to sync state"
	I1014 08:47:31.601755   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.877834    1622 memory_manager.go:170] "Starting memorymanager" policy="None"
	I1014 08:47:31.601755   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.877929    1622 state_mem.go:35] "Initializing new in-memory state store"
	I1014 08:47:31.601755   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.878704    1622 state_mem.go:75] "Updated machine memory state"
	I1014 08:47:31.601755   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.884555    1622 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I1014 08:47:31.601755   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.885687    1622 eviction_manager.go:189] "Eviction manager: starting control loop"
	I1014 08:47:31.601755   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.885828    1622 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I1014 08:47:31.601755   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: E1014 15:46:09.889524    1622 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-671000\" not found"
	I1014 08:47:31.601755   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.892062    1622 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I1014 08:47:31.601755   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.900012    1622 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I1014 08:47:31.601755   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.905094    1622 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I1014 08:47:31.601755   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.906277    1622 status_manager.go:217] "Starting to sync pod status with apiserver"
	I1014 08:47:31.601755   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.906885    1622 kubelet.go:2321] "Starting kubelet main sync loop"
	I1014 08:47:31.602288   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: E1014 15:46:09.907458    1622 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
	I1014 08:47:31.602288   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: W1014 15:46:09.914061    1622 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.20.106.123:8443: connect: connection refused
	I1014 08:47:31.602472   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: E1014 15:46:09.914371    1622 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.20.106.123:8443: connect: connection refused" logger="UnhandledError"
	I1014 08:47:31.602528   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: E1014 15:46:09.933056    1622 iptables.go:577] "Could not set up iptables canary" err=<
	I1014 08:47:31.602528   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I1014 08:47:31.602528   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I1014 08:47:31.602631   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I1014 08:47:31.602659   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I1014 08:47:31.602688   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: I1014 15:46:09.987581    1622 kubelet_node_status.go:72] "Attempting to register node" node="multinode-671000"
	I1014 08:47:31.602719   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 kubelet[1622]: E1014 15:46:09.988812    1622 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.20.106.123:8443: connect: connection refused" node="multinode-671000"
	I1014 08:47:31.602748   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.008458    1622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f8cc9a218fef4446839b0253827fc2dd89f8eccbd66ab7f0e5123c033f0aacc"
	I1014 08:47:31.602748   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: E1014 15:46:10.013887    1622 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-671000?timeout=10s\": dial tcp 172.20.106.123:8443: connect: connection refused" interval="400ms"
	I1014 08:47:31.602748   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.014354    1622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5733d27d2f1c328dbd19f6392a86e426f344b6f17c65211404fa797e84b69c9"
	I1014 08:47:31.602748   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.014436    1622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06e529266db4b2cf3ad09285cddeb89d7d11e858b5156cf73f41198c9500b9f2"
	I1014 08:47:31.602748   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.014506    1622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e48ddcfdf90ad3bfbe621f27c97a331f448947ca77dbd98ab3c9daef2c84e22"
	I1014 08:47:31.602748   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.020161    1622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2dc78387553ff4b78626f5e6aa103a40ec97f42ef49363e27d7d3698cd0df26f"
	I1014 08:47:31.602748   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.035902    1622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c6be2bd1889b6c5f021362c07c3a88f7f0ff266bb9e8ba4106d666b0f1d267d"
	I1014 08:47:31.602748   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.049024    1622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1863de70f2316e54fa61ef7c5c6aba94808669b81b1cc811dce745011ee807cb"
	I1014 08:47:31.602748   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.065264    1622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7144d8ce208cf8c176ad1fc9980a72d450a3d558c4f8f9ee453dea6b22358085"
	I1014 08:47:31.602748   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.079145    1622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bfdde08319e32b93d740933d5ab50829de8f9f3edacce92efe155b4ada4f4212"
	I1014 08:47:31.602748   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.179820    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/864b69f35cb25e9dd5d87a753a055a10-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-671000\" (UID: \"864b69f35cb25e9dd5d87a753a055a10\") " pod="kube-system/kube-apiserver-multinode-671000"
	I1014 08:47:31.602748   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.179915    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b83e31864e0ae98d29d960866012ecb0-k8s-certs\") pod \"kube-controller-manager-multinode-671000\" (UID: \"b83e31864e0ae98d29d960866012ecb0\") " pod="kube-system/kube-controller-manager-multinode-671000"
	I1014 08:47:31.602748   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.179945    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b83e31864e0ae98d29d960866012ecb0-kubeconfig\") pod \"kube-controller-manager-multinode-671000\" (UID: \"b83e31864e0ae98d29d960866012ecb0\") " pod="kube-system/kube-controller-manager-multinode-671000"
	I1014 08:47:31.602748   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.179963    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3e987cfaedc75c39145e8fc131c60c81-kubeconfig\") pod \"kube-scheduler-multinode-671000\" (UID: \"3e987cfaedc75c39145e8fc131c60c81\") " pod="kube-system/kube-scheduler-multinode-671000"
	I1014 08:47:31.602748   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.179984    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/679486a8f27de5805bc2e87fb1920dce-etcd-certs\") pod \"etcd-multinode-671000\" (UID: \"679486a8f27de5805bc2e87fb1920dce\") " pod="kube-system/etcd-multinode-671000"
	I1014 08:47:31.603277   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.180012    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/679486a8f27de5805bc2e87fb1920dce-etcd-data\") pod \"etcd-multinode-671000\" (UID: \"679486a8f27de5805bc2e87fb1920dce\") " pod="kube-system/etcd-multinode-671000"
	I1014 08:47:31.603339   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.180036    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/864b69f35cb25e9dd5d87a753a055a10-ca-certs\") pod \"kube-apiserver-multinode-671000\" (UID: \"864b69f35cb25e9dd5d87a753a055a10\") " pod="kube-system/kube-apiserver-multinode-671000"
	I1014 08:47:31.603339   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.180050    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/864b69f35cb25e9dd5d87a753a055a10-k8s-certs\") pod \"kube-apiserver-multinode-671000\" (UID: \"864b69f35cb25e9dd5d87a753a055a10\") " pod="kube-system/kube-apiserver-multinode-671000"
	I1014 08:47:31.603432   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.180068    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b83e31864e0ae98d29d960866012ecb0-ca-certs\") pod \"kube-controller-manager-multinode-671000\" (UID: \"b83e31864e0ae98d29d960866012ecb0\") " pod="kube-system/kube-controller-manager-multinode-671000"
	I1014 08:47:31.603432   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.180089    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b83e31864e0ae98d29d960866012ecb0-flexvolume-dir\") pod \"kube-controller-manager-multinode-671000\" (UID: \"b83e31864e0ae98d29d960866012ecb0\") " pod="kube-system/kube-controller-manager-multinode-671000"
	I1014 08:47:31.603520   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.180113    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b83e31864e0ae98d29d960866012ecb0-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-671000\" (UID: \"b83e31864e0ae98d29d960866012ecb0\") " pod="kube-system/kube-controller-manager-multinode-671000"
	I1014 08:47:31.603548   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.191857    1622 kubelet_node_status.go:72] "Attempting to register node" node="multinode-671000"
	I1014 08:47:31.603593   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: E1014 15:46:10.193195    1622 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.20.106.123:8443: connect: connection refused" node="multinode-671000"
	I1014 08:47:31.603631   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: E1014 15:46:10.421148    1622 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-671000?timeout=10s\": dial tcp 172.20.106.123:8443: connect: connection refused" interval="800ms"
	I1014 08:47:31.603631   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: I1014 15:46:10.595286    1622 kubelet_node_status.go:72] "Attempting to register node" node="multinode-671000"
	I1014 08:47:31.603631   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: E1014 15:46:10.596178    1622 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.20.106.123:8443: connect: connection refused" node="multinode-671000"
	I1014 08:47:31.603631   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: W1014 15:46:10.601172    1622 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.20.106.123:8443: connect: connection refused
	I1014 08:47:31.603631   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: E1014 15:46:10.601259    1622 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.20.106.123:8443: connect: connection refused" logger="UnhandledError"
	I1014 08:47:31.603631   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: W1014 15:46:10.913794    1622 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.20.106.123:8443: connect: connection refused
	I1014 08:47:31.603631   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 kubelet[1622]: E1014 15:46:10.913870    1622 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.20.106.123:8443: connect: connection refused" logger="UnhandledError"
	I1014 08:47:31.603631   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 kubelet[1622]: W1014 15:46:11.078571    1622 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.20.106.123:8443: connect: connection refused
	I1014 08:47:31.603631   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 kubelet[1622]: E1014 15:46:11.078638    1622 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.20.106.123:8443: connect: connection refused" logger="UnhandledError"
	I1014 08:47:31.603631   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 kubelet[1622]: W1014 15:46:11.151154    1622 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-671000&limit=500&resourceVersion=0": dial tcp 172.20.106.123:8443: connect: connection refused
	I1014 08:47:31.603631   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 kubelet[1622]: E1014 15:46:11.151247    1622 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-671000&limit=500&resourceVersion=0\": dial tcp 172.20.106.123:8443: connect: connection refused" logger="UnhandledError"
	I1014 08:47:31.603631   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 kubelet[1622]: E1014 15:46:11.223425    1622 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-671000?timeout=10s\": dial tcp 172.20.106.123:8443: connect: connection refused" interval="1.6s"
	I1014 08:47:31.603631   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 kubelet[1622]: I1014 15:46:11.306759    1622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0697a11790e806fa4d679e3d97be4c7193692d1cb8f76d882cf3a75aa8e0c238"
	I1014 08:47:31.603631   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 kubelet[1622]: I1014 15:46:11.397496    1622 kubelet_node_status.go:72] "Attempting to register node" node="multinode-671000"
	I1014 08:47:31.603631   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 kubelet[1622]: E1014 15:46:11.399409    1622 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.20.106.123:8443: connect: connection refused" node="multinode-671000"
	I1014 08:47:31.603631   15224 command_runner.go:130] > Oct 14 15:46:13 multinode-671000 kubelet[1622]: I1014 15:46:13.001489    1622 kubelet_node_status.go:72] "Attempting to register node" node="multinode-671000"
	I1014 08:47:31.603631   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.316022    1622 kubelet_node_status.go:111] "Node was previously registered" node="multinode-671000"
	I1014 08:47:31.603631   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.316194    1622 kubelet_node_status.go:75] "Successfully registered node" node="multinode-671000"
	I1014 08:47:31.603631   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.316226    1622 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I1014 08:47:31.603631   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.317405    1622 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I1014 08:47:31.604182   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.318741    1622 setters.go:600] "Node became not ready" node="multinode-671000" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-10-14T15:46:15Z","lastTransitionTime":"2024-10-14T15:46:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I1014 08:47:31.604225   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: E1014 15:46:15.671751    1622 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"etcd-multinode-671000\" already exists" pod="kube-system/etcd-multinode-671000"
	I1014 08:47:31.604225   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.765668    1622 apiserver.go:52] "Watching apiserver"
	I1014 08:47:31.604225   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: E1014 15:46:15.771464    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:31.604328   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: E1014 15:46:15.772813    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:31.604415   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.774456    1622 kubelet.go:1895] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-671000" podUID="80ea37b8-9db1-4a39-9e9e-51c01edadfb1"
	I1014 08:47:31.604443   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.790436    1622 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	I1014 08:47:31.604443   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.804744    1622 kubelet.go:1900] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-multinode-671000"
	I1014 08:47:31.604443   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.875635    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f8d14473-8859-4015-84e9-d00656cc00c9-xtables-lock\") pod \"kube-proxy-r74dx\" (UID: \"f8d14473-8859-4015-84e9-d00656cc00c9\") " pod="kube-system/kube-proxy-r74dx"
	I1014 08:47:31.604443   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.875831    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a508bbf9-7565-4c73-98cb-e9684985c298-xtables-lock\") pod \"kindnet-wqrx6\" (UID: \"a508bbf9-7565-4c73-98cb-e9684985c298\") " pod="kube-system/kindnet-wqrx6"
	I1014 08:47:31.604443   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.876217    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f8d14473-8859-4015-84e9-d00656cc00c9-lib-modules\") pod \"kube-proxy-r74dx\" (UID: \"f8d14473-8859-4015-84e9-d00656cc00c9\") " pod="kube-system/kube-proxy-r74dx"
	I1014 08:47:31.604443   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.876424    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/fde8ff75-bc7f-4db4-b098-c3a08b38d205-tmp\") pod \"storage-provisioner\" (UID: \"fde8ff75-bc7f-4db4-b098-c3a08b38d205\") " pod="kube-system/storage-provisioner"
	I1014 08:47:31.604443   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.876537    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/a508bbf9-7565-4c73-98cb-e9684985c298-cni-cfg\") pod \"kindnet-wqrx6\" (UID: \"a508bbf9-7565-4c73-98cb-e9684985c298\") " pod="kube-system/kindnet-wqrx6"
	I1014 08:47:31.604443   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.876562    1622 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a508bbf9-7565-4c73-98cb-e9684985c298-lib-modules\") pod \"kindnet-wqrx6\" (UID: \"a508bbf9-7565-4c73-98cb-e9684985c298\") " pod="kube-system/kindnet-wqrx6"
	I1014 08:47:31.604443   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: E1014 15:46:15.877886    1622 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I1014 08:47:31.604443   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: E1014 15:46:15.877952    1622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fd736862-9e3e-4a3d-9a86-08efd2338477-config-volume podName:fd736862-9e3e-4a3d-9a86-08efd2338477 nodeName:}" failed. No retries permitted until 2024-10-14 15:46:16.377930736 +0000 UTC m=+6.769202642 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/fd736862-9e3e-4a3d-9a86-08efd2338477-config-volume") pod "coredns-7c65d6cfc9-fs9ct" (UID: "fd736862-9e3e-4a3d-9a86-08efd2338477") : object "kube-system"/"coredns" not registered
	I1014 08:47:31.604443   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.896550    1622 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	I1014 08:47:31.604443   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: E1014 15:46:15.904462    1622 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:31.604443   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: E1014 15:46:15.904557    1622 projected.go:194] Error preparing data for projected volume kube-api-access-46k9l for pod default/busybox-7dff88458-vlp7j: object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:31.604443   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: E1014 15:46:15.904737    1622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/99068807-9f92-42f1-a1a0-fb6e533dc61a-kube-api-access-46k9l podName:99068807-9f92-42f1-a1a0-fb6e533dc61a nodeName:}" failed. No retries permitted until 2024-10-14 15:46:16.404658149 +0000 UTC m=+6.795930055 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-46k9l" (UniqueName: "kubernetes.io/projected/99068807-9f92-42f1-a1a0-fb6e533dc61a-kube-api-access-46k9l") pod "busybox-7dff88458-vlp7j" (UID: "99068807-9f92-42f1-a1a0-fb6e533dc61a") : object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:31.604443   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.919872    1622 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cf38ccc62eb74f6e658e1f66ae8cab1" path="/var/lib/kubelet/pods/3cf38ccc62eb74f6e658e1f66ae8cab1/volumes"
	I1014 08:47:31.604443   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.921055    1622 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-671000" podStartSLOduration=0.921039556 podStartE2EDuration="921.039556ms" podCreationTimestamp="2024-10-14 15:46:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-14 15:46:15.920643156 +0000 UTC m=+6.311915162" watchObservedRunningTime="2024-10-14 15:46:15.921039556 +0000 UTC m=+6.312311562"
	I1014 08:47:31.604443   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 kubelet[1622]: I1014 15:46:15.921516    1622 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="778fdb620bffec66f911bf24e3c8210b" path="/var/lib/kubelet/pods/778fdb620bffec66f911bf24e3c8210b/volumes"
	I1014 08:47:31.604977   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 kubelet[1622]: E1014 15:46:16.380142    1622 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I1014 08:47:31.605166   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 kubelet[1622]: E1014 15:46:16.380233    1622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fd736862-9e3e-4a3d-9a86-08efd2338477-config-volume podName:fd736862-9e3e-4a3d-9a86-08efd2338477 nodeName:}" failed. No retries permitted until 2024-10-14 15:46:17.380214172 +0000 UTC m=+7.771486078 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/fd736862-9e3e-4a3d-9a86-08efd2338477-config-volume") pod "coredns-7c65d6cfc9-fs9ct" (UID: "fd736862-9e3e-4a3d-9a86-08efd2338477") : object "kube-system"/"coredns" not registered
	I1014 08:47:31.605229   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 kubelet[1622]: E1014 15:46:16.480798    1622 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:31.605229   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 kubelet[1622]: E1014 15:46:16.480831    1622 projected.go:194] Error preparing data for projected volume kube-api-access-46k9l for pod default/busybox-7dff88458-vlp7j: object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:31.605304   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 kubelet[1622]: E1014 15:46:16.480915    1622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/99068807-9f92-42f1-a1a0-fb6e533dc61a-kube-api-access-46k9l podName:99068807-9f92-42f1-a1a0-fb6e533dc61a nodeName:}" failed. No retries permitted until 2024-10-14 15:46:17.480897019 +0000 UTC m=+7.872168925 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-46k9l" (UniqueName: "kubernetes.io/projected/99068807-9f92-42f1-a1a0-fb6e533dc61a-kube-api-access-46k9l") pod "busybox-7dff88458-vlp7j" (UID: "99068807-9f92-42f1-a1a0-fb6e533dc61a") : object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:31.605370   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 kubelet[1622]: I1014 15:46:16.655226    1622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f8bdf552734e59df94d489a081585cdaeeaee729f0a0ae92105117d4d744f3f"
	I1014 08:47:31.605396   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 kubelet[1622]: I1014 15:46:16.670380    1622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cdcdd532ba1369fe0e866b321f18e0bd99a88a2699c325eea17a070d48f2f024"
	I1014 08:47:31.605493   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 kubelet[1622]: I1014 15:46:16.981444    1622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7bcadf1f0885ff3411b477a64c6af366c77eb264ce7a761f174b079b11e2e703"
	I1014 08:47:31.605542   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 kubelet[1622]: I1014 15:46:16.981500    1622 kubelet.go:1895] "Trying to delete pod" pod="kube-system/etcd-multinode-671000" podUID="56dfdf16-1224-41e3-94de-9d7f4021a17d"
	I1014 08:47:31.605565   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 kubelet[1622]: E1014 15:46:16.982831    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:31.605565   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 kubelet[1622]: I1014 15:46:17.011276    1622 kubelet.go:1900] "Deleted mirror pod because it is outdated" pod="kube-system/etcd-multinode-671000"
	I1014 08:47:31.605623   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 kubelet[1622]: E1014 15:46:17.388224    1622 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I1014 08:47:31.605673   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 kubelet[1622]: E1014 15:46:17.388370    1622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fd736862-9e3e-4a3d-9a86-08efd2338477-config-volume podName:fd736862-9e3e-4a3d-9a86-08efd2338477 nodeName:}" failed. No retries permitted until 2024-10-14 15:46:19.388351245 +0000 UTC m=+9.779623151 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/fd736862-9e3e-4a3d-9a86-08efd2338477-config-volume") pod "coredns-7c65d6cfc9-fs9ct" (UID: "fd736862-9e3e-4a3d-9a86-08efd2338477") : object "kube-system"/"coredns" not registered
	I1014 08:47:31.605673   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 kubelet[1622]: E1014 15:46:17.489591    1622 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:31.605673   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 kubelet[1622]: E1014 15:46:17.489649    1622 projected.go:194] Error preparing data for projected volume kube-api-access-46k9l for pod default/busybox-7dff88458-vlp7j: object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:31.605673   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 kubelet[1622]: E1014 15:46:17.489828    1622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/99068807-9f92-42f1-a1a0-fb6e533dc61a-kube-api-access-46k9l podName:99068807-9f92-42f1-a1a0-fb6e533dc61a nodeName:}" failed. No retries permitted until 2024-10-14 15:46:19.489808492 +0000 UTC m=+9.881080398 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-46k9l" (UniqueName: "kubernetes.io/projected/99068807-9f92-42f1-a1a0-fb6e533dc61a-kube-api-access-46k9l") pod "busybox-7dff88458-vlp7j" (UID: "99068807-9f92-42f1-a1a0-fb6e533dc61a") : object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:31.605673   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 kubelet[1622]: E1014 15:46:17.915482    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:31.605673   15224 command_runner.go:130] > Oct 14 15:46:18 multinode-671000 kubelet[1622]: I1014 15:46:18.163696    1622 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-multinode-671000" podStartSLOduration=1.163677409 podStartE2EDuration="1.163677409s" podCreationTimestamp="2024-10-14 15:46:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-14 15:46:18.133766095 +0000 UTC m=+8.525038101" watchObservedRunningTime="2024-10-14 15:46:18.163677409 +0000 UTC m=+8.554949415"
	I1014 08:47:31.605673   15224 command_runner.go:130] > Oct 14 15:46:18 multinode-671000 kubelet[1622]: E1014 15:46:18.908674    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:31.605673   15224 command_runner.go:130] > Oct 14 15:46:19 multinode-671000 kubelet[1622]: E1014 15:46:19.405477    1622 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I1014 08:47:31.605673   15224 command_runner.go:130] > Oct 14 15:46:19 multinode-671000 kubelet[1622]: E1014 15:46:19.405614    1622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fd736862-9e3e-4a3d-9a86-08efd2338477-config-volume podName:fd736862-9e3e-4a3d-9a86-08efd2338477 nodeName:}" failed. No retries permitted until 2024-10-14 15:46:23.405594191 +0000 UTC m=+13.796866097 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/fd736862-9e3e-4a3d-9a86-08efd2338477-config-volume") pod "coredns-7c65d6cfc9-fs9ct" (UID: "fd736862-9e3e-4a3d-9a86-08efd2338477") : object "kube-system"/"coredns" not registered
	I1014 08:47:31.605673   15224 command_runner.go:130] > Oct 14 15:46:19 multinode-671000 kubelet[1622]: E1014 15:46:19.506858    1622 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:31.605673   15224 command_runner.go:130] > Oct 14 15:46:19 multinode-671000 kubelet[1622]: E1014 15:46:19.507035    1622 projected.go:194] Error preparing data for projected volume kube-api-access-46k9l for pod default/busybox-7dff88458-vlp7j: object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:31.605673   15224 command_runner.go:130] > Oct 14 15:46:19 multinode-671000 kubelet[1622]: E1014 15:46:19.507122    1622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/99068807-9f92-42f1-a1a0-fb6e533dc61a-kube-api-access-46k9l podName:99068807-9f92-42f1-a1a0-fb6e533dc61a nodeName:}" failed. No retries permitted until 2024-10-14 15:46:23.507105839 +0000 UTC m=+13.898377845 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-46k9l" (UniqueName: "kubernetes.io/projected/99068807-9f92-42f1-a1a0-fb6e533dc61a-kube-api-access-46k9l") pod "busybox-7dff88458-vlp7j" (UID: "99068807-9f92-42f1-a1a0-fb6e533dc61a") : object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:31.605673   15224 command_runner.go:130] > Oct 14 15:46:19 multinode-671000 kubelet[1622]: E1014 15:46:19.931507    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:31.605673   15224 command_runner.go:130] > Oct 14 15:46:20 multinode-671000 kubelet[1622]: E1014 15:46:20.907760    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:31.605673   15224 command_runner.go:130] > Oct 14 15:46:21 multinode-671000 kubelet[1622]: E1014 15:46:21.908994    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:31.606297   15224 command_runner.go:130] > Oct 14 15:46:22 multinode-671000 kubelet[1622]: E1014 15:46:22.908657    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:31.606349   15224 command_runner.go:130] > Oct 14 15:46:23 multinode-671000 kubelet[1622]: E1014 15:46:23.462111    1622 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I1014 08:47:31.606466   15224 command_runner.go:130] > Oct 14 15:46:23 multinode-671000 kubelet[1622]: E1014 15:46:23.462203    1622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fd736862-9e3e-4a3d-9a86-08efd2338477-config-volume podName:fd736862-9e3e-4a3d-9a86-08efd2338477 nodeName:}" failed. No retries permitted until 2024-10-14 15:46:31.462185592 +0000 UTC m=+21.853457598 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/fd736862-9e3e-4a3d-9a86-08efd2338477-config-volume") pod "coredns-7c65d6cfc9-fs9ct" (UID: "fd736862-9e3e-4a3d-9a86-08efd2338477") : object "kube-system"/"coredns" not registered
	I1014 08:47:31.606466   15224 command_runner.go:130] > Oct 14 15:46:23 multinode-671000 kubelet[1622]: E1014 15:46:23.562508    1622 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:31.606466   15224 command_runner.go:130] > Oct 14 15:46:23 multinode-671000 kubelet[1622]: E1014 15:46:23.562563    1622 projected.go:194] Error preparing data for projected volume kube-api-access-46k9l for pod default/busybox-7dff88458-vlp7j: object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:31.606466   15224 command_runner.go:130] > Oct 14 15:46:23 multinode-671000 kubelet[1622]: E1014 15:46:23.562768    1622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/99068807-9f92-42f1-a1a0-fb6e533dc61a-kube-api-access-46k9l podName:99068807-9f92-42f1-a1a0-fb6e533dc61a nodeName:}" failed. No retries permitted until 2024-10-14 15:46:31.562650785 +0000 UTC m=+21.953922691 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-46k9l" (UniqueName: "kubernetes.io/projected/99068807-9f92-42f1-a1a0-fb6e533dc61a-kube-api-access-46k9l") pod "busybox-7dff88458-vlp7j" (UID: "99068807-9f92-42f1-a1a0-fb6e533dc61a") : object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:31.606466   15224 command_runner.go:130] > Oct 14 15:46:23 multinode-671000 kubelet[1622]: E1014 15:46:23.910119    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:31.606466   15224 command_runner.go:130] > Oct 14 15:46:24 multinode-671000 kubelet[1622]: E1014 15:46:24.908917    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:31.606466   15224 command_runner.go:130] > Oct 14 15:46:25 multinode-671000 kubelet[1622]: E1014 15:46:25.909505    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:31.606466   15224 command_runner.go:130] > Oct 14 15:46:26 multinode-671000 kubelet[1622]: E1014 15:46:26.907750    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:31.607068   15224 command_runner.go:130] > Oct 14 15:46:27 multinode-671000 kubelet[1622]: E1014 15:46:27.908822    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:31.607068   15224 command_runner.go:130] > Oct 14 15:46:28 multinode-671000 kubelet[1622]: E1014 15:46:28.908219    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:31.607068   15224 command_runner.go:130] > Oct 14 15:46:29 multinode-671000 kubelet[1622]: E1014 15:46:29.910218    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:31.607068   15224 command_runner.go:130] > Oct 14 15:46:30 multinode-671000 kubelet[1622]: E1014 15:46:30.908259    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:31.607068   15224 command_runner.go:130] > Oct 14 15:46:31 multinode-671000 kubelet[1622]: E1014 15:46:31.541520    1622 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I1014 08:47:31.607068   15224 command_runner.go:130] > Oct 14 15:46:31 multinode-671000 kubelet[1622]: E1014 15:46:31.541653    1622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fd736862-9e3e-4a3d-9a86-08efd2338477-config-volume podName:fd736862-9e3e-4a3d-9a86-08efd2338477 nodeName:}" failed. No retries permitted until 2024-10-14 15:46:47.541634578 +0000 UTC m=+37.932906484 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/fd736862-9e3e-4a3d-9a86-08efd2338477-config-volume") pod "coredns-7c65d6cfc9-fs9ct" (UID: "fd736862-9e3e-4a3d-9a86-08efd2338477") : object "kube-system"/"coredns" not registered
	I1014 08:47:31.607068   15224 command_runner.go:130] > Oct 14 15:46:31 multinode-671000 kubelet[1622]: E1014 15:46:31.641930    1622 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:31.607068   15224 command_runner.go:130] > Oct 14 15:46:31 multinode-671000 kubelet[1622]: E1014 15:46:31.641961    1622 projected.go:194] Error preparing data for projected volume kube-api-access-46k9l for pod default/busybox-7dff88458-vlp7j: object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:31.607068   15224 command_runner.go:130] > Oct 14 15:46:31 multinode-671000 kubelet[1622]: E1014 15:46:31.642009    1622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/99068807-9f92-42f1-a1a0-fb6e533dc61a-kube-api-access-46k9l podName:99068807-9f92-42f1-a1a0-fb6e533dc61a nodeName:}" failed. No retries permitted until 2024-10-14 15:46:47.641990935 +0000 UTC m=+38.033262841 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-46k9l" (UniqueName: "kubernetes.io/projected/99068807-9f92-42f1-a1a0-fb6e533dc61a-kube-api-access-46k9l") pod "busybox-7dff88458-vlp7j" (UID: "99068807-9f92-42f1-a1a0-fb6e533dc61a") : object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:31.607068   15224 command_runner.go:130] > Oct 14 15:46:31 multinode-671000 kubelet[1622]: E1014 15:46:31.908383    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:31.607068   15224 command_runner.go:130] > Oct 14 15:46:32 multinode-671000 kubelet[1622]: E1014 15:46:32.908527    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:31.607068   15224 command_runner.go:130] > Oct 14 15:46:33 multinode-671000 kubelet[1622]: E1014 15:46:33.910838    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:31.607614   15224 command_runner.go:130] > Oct 14 15:46:34 multinode-671000 kubelet[1622]: E1014 15:46:34.908180    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:31.607657   15224 command_runner.go:130] > Oct 14 15:46:35 multinode-671000 kubelet[1622]: E1014 15:46:35.908574    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:31.607657   15224 command_runner.go:130] > Oct 14 15:46:36 multinode-671000 kubelet[1622]: E1014 15:46:36.907722    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:31.607796   15224 command_runner.go:130] > Oct 14 15:46:37 multinode-671000 kubelet[1622]: E1014 15:46:37.907861    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:31.607876   15224 command_runner.go:130] > Oct 14 15:46:38 multinode-671000 kubelet[1622]: E1014 15:46:38.908728    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:31.607876   15224 command_runner.go:130] > Oct 14 15:46:39 multinode-671000 kubelet[1622]: E1014 15:46:39.908994    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:31.607959   15224 command_runner.go:130] > Oct 14 15:46:40 multinode-671000 kubelet[1622]: E1014 15:46:40.908676    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:31.608010   15224 command_runner.go:130] > Oct 14 15:46:41 multinode-671000 kubelet[1622]: E1014 15:46:41.909525    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:31.608028   15224 command_runner.go:130] > Oct 14 15:46:42 multinode-671000 kubelet[1622]: E1014 15:46:42.908679    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:31.608089   15224 command_runner.go:130] > Oct 14 15:46:43 multinode-671000 kubelet[1622]: E1014 15:46:43.908615    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:31.608147   15224 command_runner.go:130] > Oct 14 15:46:44 multinode-671000 kubelet[1622]: E1014 15:46:44.908884    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:31.608182   15224 command_runner.go:130] > Oct 14 15:46:45 multinode-671000 kubelet[1622]: E1014 15:46:45.908370    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:31.608254   15224 command_runner.go:130] > Oct 14 15:46:46 multinode-671000 kubelet[1622]: E1014 15:46:46.909263    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:31.608281   15224 command_runner.go:130] > Oct 14 15:46:47 multinode-671000 kubelet[1622]: E1014 15:46:47.573240    1622 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I1014 08:47:31.608281   15224 command_runner.go:130] > Oct 14 15:46:47 multinode-671000 kubelet[1622]: E1014 15:46:47.573353    1622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fd736862-9e3e-4a3d-9a86-08efd2338477-config-volume podName:fd736862-9e3e-4a3d-9a86-08efd2338477 nodeName:}" failed. No retries permitted until 2024-10-14 15:47:19.573334644 +0000 UTC m=+69.964606650 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/fd736862-9e3e-4a3d-9a86-08efd2338477-config-volume") pod "coredns-7c65d6cfc9-fs9ct" (UID: "fd736862-9e3e-4a3d-9a86-08efd2338477") : object "kube-system"/"coredns" not registered
	I1014 08:47:31.608281   15224 command_runner.go:130] > Oct 14 15:46:47 multinode-671000 kubelet[1622]: E1014 15:46:47.673810    1622 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:31.608281   15224 command_runner.go:130] > Oct 14 15:46:47 multinode-671000 kubelet[1622]: E1014 15:46:47.673907    1622 projected.go:194] Error preparing data for projected volume kube-api-access-46k9l for pod default/busybox-7dff88458-vlp7j: object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:31.608281   15224 command_runner.go:130] > Oct 14 15:46:47 multinode-671000 kubelet[1622]: E1014 15:46:47.674014    1622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/99068807-9f92-42f1-a1a0-fb6e533dc61a-kube-api-access-46k9l podName:99068807-9f92-42f1-a1a0-fb6e533dc61a nodeName:}" failed. No retries permitted until 2024-10-14 15:47:19.673994259 +0000 UTC m=+70.065266165 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-46k9l" (UniqueName: "kubernetes.io/projected/99068807-9f92-42f1-a1a0-fb6e533dc61a-kube-api-access-46k9l") pod "busybox-7dff88458-vlp7j" (UID: "99068807-9f92-42f1-a1a0-fb6e533dc61a") : object "default"/"kube-root-ca.crt" not registered
	I1014 08:47:31.608281   15224 command_runner.go:130] > Oct 14 15:46:47 multinode-671000 kubelet[1622]: E1014 15:46:47.908883    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:31.608281   15224 command_runner.go:130] > Oct 14 15:46:48 multinode-671000 kubelet[1622]: I1014 15:46:48.486803    1622 scope.go:117] "RemoveContainer" containerID="3d8b7bae48a59c755a1ffda14e7fdd0c2302b394db67b7de21fd5b819dad243b"
	I1014 08:47:31.608281   15224 command_runner.go:130] > Oct 14 15:46:48 multinode-671000 kubelet[1622]: I1014 15:46:48.487259    1622 scope.go:117] "RemoveContainer" containerID="c76c2585681071ddc096fbc042a644b58d0b7d5d604107c6906a076c7aa1cca6"
	I1014 08:47:31.608281   15224 command_runner.go:130] > Oct 14 15:46:48 multinode-671000 kubelet[1622]: E1014 15:46:48.487448    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(fde8ff75-bc7f-4db4-b098-c3a08b38d205)\"" pod="kube-system/storage-provisioner" podUID="fde8ff75-bc7f-4db4-b098-c3a08b38d205"
	I1014 08:47:31.608281   15224 command_runner.go:130] > Oct 14 15:46:48 multinode-671000 kubelet[1622]: E1014 15:46:48.908732    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:31.608281   15224 command_runner.go:130] > Oct 14 15:46:49 multinode-671000 kubelet[1622]: E1014 15:46:49.908877    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:31.608839   15224 command_runner.go:130] > Oct 14 15:46:50 multinode-671000 kubelet[1622]: E1014 15:46:50.907718    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:31.608919   15224 command_runner.go:130] > Oct 14 15:46:51 multinode-671000 kubelet[1622]: E1014 15:46:51.909552    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:31.608919   15224 command_runner.go:130] > Oct 14 15:46:52 multinode-671000 kubelet[1622]: E1014 15:46:52.908818    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:31.609126   15224 command_runner.go:130] > Oct 14 15:46:53 multinode-671000 kubelet[1622]: E1014 15:46:53.908389    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:31.609126   15224 command_runner.go:130] > Oct 14 15:46:54 multinode-671000 kubelet[1622]: E1014 15:46:54.908089    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:31.609250   15224 command_runner.go:130] > Oct 14 15:46:55 multinode-671000 kubelet[1622]: E1014 15:46:55.908582    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:31.609250   15224 command_runner.go:130] > Oct 14 15:46:56 multinode-671000 kubelet[1622]: E1014 15:46:56.908839    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:31.609250   15224 command_runner.go:130] > Oct 14 15:46:57 multinode-671000 kubelet[1622]: E1014 15:46:57.909489    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-7dff88458-vlp7j" podUID="99068807-9f92-42f1-a1a0-fb6e533dc61a"
	I1014 08:47:31.609250   15224 command_runner.go:130] > Oct 14 15:46:58 multinode-671000 kubelet[1622]: E1014 15:46:58.908804    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	I1014 08:47:31.609250   15224 command_runner.go:130] > Oct 14 15:46:59 multinode-671000 kubelet[1622]: I1014 15:46:59.853068    1622 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
	I1014 08:47:31.609250   15224 command_runner.go:130] > Oct 14 15:47:02 multinode-671000 kubelet[1622]: I1014 15:47:02.908981    1622 scope.go:117] "RemoveContainer" containerID="c76c2585681071ddc096fbc042a644b58d0b7d5d604107c6906a076c7aa1cca6"
	I1014 08:47:31.609250   15224 command_runner.go:130] > Oct 14 15:47:09 multinode-671000 kubelet[1622]: I1014 15:47:09.901385    1622 scope.go:117] "RemoveContainer" containerID="0b5a6e440d7b67606ed0a4dfa4d07715b1fd7e6f53bc0b8779f86a33c5baf6e9"
	I1014 08:47:31.609250   15224 command_runner.go:130] > Oct 14 15:47:09 multinode-671000 kubelet[1622]: I1014 15:47:09.946936    1622 scope.go:117] "RemoveContainer" containerID="1ba3cd8bbd5963097f4d674fc98eca21e1a710f5a150a067747aa4e6c922d2fe"
	I1014 08:47:31.609250   15224 command_runner.go:130] > Oct 14 15:47:09 multinode-671000 kubelet[1622]: E1014 15:47:09.949713    1622 iptables.go:577] "Could not set up iptables canary" err=<
	I1014 08:47:31.609250   15224 command_runner.go:130] > Oct 14 15:47:09 multinode-671000 kubelet[1622]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I1014 08:47:31.609250   15224 command_runner.go:130] > Oct 14 15:47:09 multinode-671000 kubelet[1622]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I1014 08:47:31.609250   15224 command_runner.go:130] > Oct 14 15:47:09 multinode-671000 kubelet[1622]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I1014 08:47:31.609250   15224 command_runner.go:130] > Oct 14 15:47:09 multinode-671000 kubelet[1622]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I1014 08:47:31.651929   15224 logs.go:123] Gathering logs for coredns [5d223e2e64fc] ...
	I1014 08:47:31.651929   15224 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5d223e2e64fc"
	I1014 08:47:31.683801   15224 command_runner.go:130] > .:53
	I1014 08:47:31.683896   15224 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 6020d6b23d8ab86e45d1d2aab12a43bd19ffd1800c3e6fd0a66779be525e59cd5618fbe141b55ee3a43e920652bc72e7efafa696e55e63b2f614e5fc80dabe10
	I1014 08:47:31.683896   15224 command_runner.go:130] > CoreDNS-1.11.3
	I1014 08:47:31.683896   15224 command_runner.go:130] > linux/amd64, go1.21.11, a6338e9
	I1014 08:47:31.683896   15224 command_runner.go:130] > [INFO] 127.0.0.1:42996 - 9104 "HINFO IN 5434967794797104596.5472118418078127170. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.148386647s
	I1014 08:47:31.684185   15224 logs.go:123] Gathering logs for kube-controller-manager [712aad669c9f] ...
	I1014 08:47:31.684185   15224 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 712aad669c9f"
	I1014 08:47:31.713967   15224 command_runner.go:130] ! I1014 15:22:34.276457       1 serving.go:386] Generated self-signed cert in-memory
	I1014 08:47:31.713967   15224 command_runner.go:130] ! I1014 15:22:34.721812       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I1014 08:47:31.713967   15224 command_runner.go:130] ! I1014 15:22:34.722099       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 08:47:31.713967   15224 command_runner.go:130] ! I1014 15:22:34.724748       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1014 08:47:31.713967   15224 command_runner.go:130] ! I1014 15:22:34.725085       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1014 08:47:31.713967   15224 command_runner.go:130] ! I1014 15:22:34.725754       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I1014 08:47:31.713967   15224 command_runner.go:130] ! I1014 15:22:34.725985       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1014 08:47:31.713967   15224 command_runner.go:130] ! I1014 15:22:39.207411       1 controllermanager.go:797] "Started controller" controller="serviceaccount-token-controller"
	I1014 08:47:31.713967   15224 command_runner.go:130] ! I1014 15:22:39.208026       1 controllermanager.go:749] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I1014 08:47:31.713967   15224 command_runner.go:130] ! I1014 15:22:39.207651       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I1014 08:47:31.713967   15224 command_runner.go:130] ! I1014 15:22:39.210064       1 shared_informer.go:320] Caches are synced for tokens
	I1014 08:47:31.713967   15224 command_runner.go:130] ! I1014 15:22:39.224528       1 controllermanager.go:797] "Started controller" controller="taint-eviction-controller"
	I1014 08:47:31.713967   15224 command_runner.go:130] ! I1014 15:22:39.224966       1 taint_eviction.go:281] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I1014 08:47:31.713967   15224 command_runner.go:130] ! I1014 15:22:39.225213       1 taint_eviction.go:287] "Sending events to api server" logger="taint-eviction-controller"
	I1014 08:47:31.713967   15224 command_runner.go:130] ! I1014 15:22:39.226734       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I1014 08:47:31.713967   15224 command_runner.go:130] ! I1014 15:22:39.238395       1 controllermanager.go:797] "Started controller" controller="pod-garbage-collector-controller"
	I1014 08:47:31.713967   15224 command_runner.go:130] ! I1014 15:22:39.238610       1 gc_controller.go:99] "Starting GC controller" logger="pod-garbage-collector-controller"
	I1014 08:47:31.713967   15224 command_runner.go:130] ! I1014 15:22:39.239186       1 shared_informer.go:313] Waiting for caches to sync for GC
	I1014 08:47:31.713967   15224 command_runner.go:130] ! I1014 15:22:39.257957       1 controllermanager.go:797] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I1014 08:47:31.713967   15224 command_runner.go:130] ! I1014 15:22:39.258113       1 horizontal.go:201] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I1014 08:47:31.713967   15224 command_runner.go:130] ! I1014 15:22:39.264110       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I1014 08:47:31.713967   15224 command_runner.go:130] ! I1014 15:22:39.291746       1 controllermanager.go:797] "Started controller" controller="token-cleaner-controller"
	I1014 08:47:31.713967   15224 command_runner.go:130] ! I1014 15:22:39.291968       1 tokencleaner.go:117] "Starting token cleaner controller" logger="token-cleaner-controller"
	I1014 08:47:31.713967   15224 command_runner.go:130] ! I1014 15:22:39.292012       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I1014 08:47:31.713967   15224 command_runner.go:130] ! I1014 15:22:39.292035       1 shared_informer.go:320] Caches are synced for token_cleaner
	I1014 08:47:31.713967   15224 command_runner.go:130] ! E1014 15:22:39.298368       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I1014 08:47:31.713967   15224 command_runner.go:130] ! I1014 15:22:39.298490       1 controllermanager.go:775] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I1014 08:47:31.713967   15224 command_runner.go:130] ! I1014 15:22:39.320068       1 controllermanager.go:797] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I1014 08:47:31.713967   15224 command_runner.go:130] ! I1014 15:22:39.321579       1 pvc_protection_controller.go:105] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I1014 08:47:31.713967   15224 command_runner.go:130] ! I1014 15:22:39.322507       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I1014 08:47:31.713967   15224 command_runner.go:130] ! I1014 15:22:39.334562       1 controllermanager.go:797] "Started controller" controller="endpointslice-mirroring-controller"
	I1014 08:47:31.713967   15224 command_runner.go:130] ! I1014 15:22:39.335065       1 endpointslicemirroring_controller.go:227] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I1014 08:47:31.713967   15224 command_runner.go:130] ! I1014 15:22:39.335174       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I1014 08:47:31.713967   15224 command_runner.go:130] ! I1014 15:22:39.357454       1 controllermanager.go:797] "Started controller" controller="serviceaccount-controller"
	I1014 08:47:31.713967   15224 command_runner.go:130] ! I1014 15:22:39.357636       1 serviceaccounts_controller.go:114] "Starting service account controller" logger="serviceaccount-controller"
	I1014 08:47:31.713967   15224 command_runner.go:130] ! I1014 15:22:39.357669       1 shared_informer.go:313] Waiting for caches to sync for service account
	I1014 08:47:31.713967   15224 command_runner.go:130] ! I1014 15:22:39.377687       1 controllermanager.go:797] "Started controller" controller="daemonset-controller"
	I1014 08:47:31.713967   15224 command_runner.go:130] ! I1014 15:22:39.378056       1 daemon_controller.go:294] "Starting daemon sets controller" logger="daemonset-controller"
	I1014 08:47:31.713967   15224 command_runner.go:130] ! I1014 15:22:39.378087       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I1014 08:47:31.713967   15224 command_runner.go:130] ! I1014 15:22:39.416186       1 controllermanager.go:797] "Started controller" controller="disruption-controller"
	I1014 08:47:31.714989   15224 command_runner.go:130] ! I1014 15:22:39.416643       1 disruption.go:452] "Sending events to api server." logger="disruption-controller"
	I1014 08:47:31.714989   15224 command_runner.go:130] ! I1014 15:22:39.417022       1 disruption.go:463] "Starting disruption controller" logger="disruption-controller"
	I1014 08:47:31.714989   15224 command_runner.go:130] ! I1014 15:22:39.417371       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I1014 08:47:31.714989   15224 command_runner.go:130] ! I1014 15:22:39.469032       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I1014 08:47:31.714989   15224 command_runner.go:130] ! I1014 15:22:39.469507       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I1014 08:47:31.714989   15224 command_runner.go:130] ! I1014 15:22:39.469770       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I1014 08:47:31.714989   15224 command_runner.go:130] ! I1014 15:22:39.470779       1 controllermanager.go:797] "Started controller" controller="certificatesigningrequest-signing-controller"
	I1014 08:47:31.714989   15224 command_runner.go:130] ! I1014 15:22:39.470793       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I1014 08:47:31.714989   15224 command_runner.go:130] ! I1014 15:22:39.471453       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I1014 08:47:31.714989   15224 command_runner.go:130] ! I1014 15:22:39.470805       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I1014 08:47:31.714989   15224 command_runner.go:130] ! I1014 15:22:39.470829       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I1014 08:47:31.714989   15224 command_runner.go:130] ! I1014 15:22:39.471957       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I1014 08:47:31.714989   15224 command_runner.go:130] ! I1014 15:22:39.470841       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I1014 08:47:31.714989   15224 command_runner.go:130] ! I1014 15:22:39.470861       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I1014 08:47:31.714989   15224 command_runner.go:130] ! I1014 15:22:39.472955       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I1014 08:47:31.714989   15224 command_runner.go:130] ! I1014 15:22:39.470870       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I1014 08:47:31.714989   15224 command_runner.go:130] ! I1014 15:22:39.621859       1 controllermanager.go:797] "Started controller" controller="persistentvolume-binder-controller"
	I1014 08:47:31.714989   15224 command_runner.go:130] ! I1014 15:22:39.622638       1 pv_controller_base.go:308] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I1014 08:47:31.714989   15224 command_runner.go:130] ! I1014 15:22:39.623052       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I1014 08:47:31.714989   15224 command_runner.go:130] ! I1014 15:22:39.777984       1 controllermanager.go:797] "Started controller" controller="root-ca-certificate-publisher-controller"
	I1014 08:47:31.714989   15224 command_runner.go:130] ! I1014 15:22:39.778063       1 publisher.go:107] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I1014 08:47:31.714989   15224 command_runner.go:130] ! I1014 15:22:39.778141       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I1014 08:47:31.714989   15224 command_runner.go:130] ! I1014 15:22:39.918879       1 controllermanager.go:797] "Started controller" controller="replicationcontroller-controller"
	I1014 08:47:31.714989   15224 command_runner.go:130] ! I1014 15:22:39.919046       1 replica_set.go:217] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I1014 08:47:31.714989   15224 command_runner.go:130] ! I1014 15:22:39.919060       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I1014 08:47:31.714989   15224 command_runner.go:130] ! I1014 15:22:40.166453       1 controllermanager.go:797] "Started controller" controller="garbage-collector-controller"
	I1014 08:47:31.714989   15224 command_runner.go:130] ! I1014 15:22:40.167822       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I1014 08:47:31.714989   15224 command_runner.go:130] ! I1014 15:22:40.168483       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I1014 08:47:31.714989   15224 command_runner.go:130] ! I1014 15:22:40.168745       1 graph_builder.go:351] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I1014 08:47:31.714989   15224 command_runner.go:130] ! I1014 15:22:40.423412       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I1014 08:47:31.714989   15224 command_runner.go:130] ! I1014 15:22:40.423795       1 controllermanager.go:797] "Started controller" controller="node-ipam-controller"
	I1014 08:47:31.714989   15224 command_runner.go:130] ! I1014 15:22:40.424239       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I1014 08:47:31.714989   15224 command_runner.go:130] ! I1014 15:22:40.424496       1 controllermanager.go:775] "Warning: skipping controller" controller="node-route-controller"
	I1014 08:47:31.714989   15224 command_runner.go:130] ! I1014 15:22:40.424173       1 node_ipam_controller.go:141] "Starting ipam controller" logger="node-ipam-controller"
	I1014 08:47:31.714989   15224 command_runner.go:130] ! I1014 15:22:40.425286       1 shared_informer.go:313] Waiting for caches to sync for node
	I1014 08:47:31.714989   15224 command_runner.go:130] ! I1014 15:22:40.570482       1 controllermanager.go:797] "Started controller" controller="persistentvolume-expander-controller"
	I1014 08:47:31.714989   15224 command_runner.go:130] ! I1014 15:22:40.570669       1 expand_controller.go:328] "Starting expand controller" logger="persistentvolume-expander-controller"
	I1014 08:47:31.714989   15224 command_runner.go:130] ! I1014 15:22:40.570684       1 shared_informer.go:313] Waiting for caches to sync for expand
	I1014 08:47:31.714989   15224 command_runner.go:130] ! I1014 15:22:40.718742       1 controllermanager.go:797] "Started controller" controller="ttl-after-finished-controller"
	I1014 08:47:31.714989   15224 command_runner.go:130] ! I1014 15:22:40.718766       1 controllermanager.go:749] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I1014 08:47:31.714989   15224 command_runner.go:130] ! I1014 15:22:40.718828       1 ttlafterfinished_controller.go:112] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I1014 08:47:31.714989   15224 command_runner.go:130] ! I1014 15:22:40.718839       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I1014 08:47:31.714989   15224 command_runner.go:130] ! I1014 15:22:40.875244       1 controllermanager.go:797] "Started controller" controller="endpointslice-controller"
	I1014 08:47:31.714989   15224 command_runner.go:130] ! I1014 15:22:40.875390       1 endpointslice_controller.go:281] "Starting endpoint slice controller" logger="endpointslice-controller"
	I1014 08:47:31.715966   15224 command_runner.go:130] ! I1014 15:22:40.875405       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I1014 08:47:31.715966   15224 command_runner.go:130] ! I1014 15:22:41.022254       1 controllermanager.go:797] "Started controller" controller="replicaset-controller"
	I1014 08:47:31.715966   15224 command_runner.go:130] ! I1014 15:22:41.023099       1 replica_set.go:217] "Starting controller" logger="replicaset-controller" name="replicaset"
	I1014 08:47:31.715966   15224 command_runner.go:130] ! I1014 15:22:41.023161       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I1014 08:47:31.715966   15224 command_runner.go:130] ! I1014 15:22:41.176342       1 controllermanager.go:797] "Started controller" controller="statefulset-controller"
	I1014 08:47:31.715966   15224 command_runner.go:130] ! I1014 15:22:41.176460       1 stateful_set.go:166] "Starting stateful set controller" logger="statefulset-controller"
	I1014 08:47:31.715966   15224 command_runner.go:130] ! I1014 15:22:41.176471       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I1014 08:47:31.715966   15224 command_runner.go:130] ! I1014 15:22:41.319171       1 controllermanager.go:797] "Started controller" controller="persistentvolume-protection-controller"
	I1014 08:47:31.715966   15224 command_runner.go:130] ! I1014 15:22:41.319300       1 pv_protection_controller.go:81] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I1014 08:47:31.715966   15224 command_runner.go:130] ! I1014 15:22:41.319332       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I1014 08:47:31.715966   15224 command_runner.go:130] ! I1014 15:22:41.469263       1 controllermanager.go:797] "Started controller" controller="clusterrole-aggregation-controller"
	I1014 08:47:31.715966   15224 command_runner.go:130] ! I1014 15:22:41.469488       1 clusterroleaggregation_controller.go:194] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I1014 08:47:31.715966   15224 command_runner.go:130] ! I1014 15:22:41.470311       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I1014 08:47:31.715966   15224 command_runner.go:130] ! I1014 15:22:41.618471       1 controllermanager.go:797] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I1014 08:47:31.715966   15224 command_runner.go:130] ! I1014 15:22:41.618507       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I1014 08:47:31.715966   15224 command_runner.go:130] ! I1014 15:22:41.619582       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I1014 08:47:31.715966   15224 command_runner.go:130] ! I1014 15:22:41.813364       1 controllermanager.go:797] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I1014 08:47:31.715966   15224 command_runner.go:130] ! I1014 15:22:41.813412       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I1014 08:47:31.715966   15224 command_runner.go:130] ! I1014 15:22:42.123997       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I1014 08:47:31.715966   15224 command_runner.go:130] ! I1014 15:22:42.124656       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I1014 08:47:31.715966   15224 command_runner.go:130] ! I1014 15:22:42.125147       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I1014 08:47:31.715966   15224 command_runner.go:130] ! I1014 15:22:42.125502       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I1014 08:47:31.715966   15224 command_runner.go:130] ! I1014 15:22:42.125684       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I1014 08:47:31.715966   15224 command_runner.go:130] ! I1014 15:22:42.125715       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I1014 08:47:31.715966   15224 command_runner.go:130] ! I1014 15:22:42.125765       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I1014 08:47:31.715966   15224 command_runner.go:130] ! I1014 15:22:42.125789       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I1014 08:47:31.715966   15224 command_runner.go:130] ! I1014 15:22:42.125821       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I1014 08:47:31.715966   15224 command_runner.go:130] ! I1014 15:22:42.125850       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I1014 08:47:31.715966   15224 command_runner.go:130] ! I1014 15:22:42.125919       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I1014 08:47:31.715966   15224 command_runner.go:130] ! I1014 15:22:42.125938       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I1014 08:47:31.715966   15224 command_runner.go:130] ! W1014 15:22:42.125970       1 shared_informer.go:597] resyncPeriod 22h30m25.60471532s is smaller than resyncCheckPeriod 23h6m49.635307948s and the informer has already started. Changing it to 23h6m49.635307948s
	I1014 08:47:31.715966   15224 command_runner.go:130] ! W1014 15:22:42.126028       1 shared_informer.go:597] resyncPeriod 22h40m57.132720005s is smaller than resyncCheckPeriod 23h6m49.635307948s and the informer has already started. Changing it to 23h6m49.635307948s
	I1014 08:47:31.715966   15224 command_runner.go:130] ! I1014 15:22:42.126215       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I1014 08:47:31.715966   15224 command_runner.go:130] ! I1014 15:22:42.126353       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I1014 08:47:31.715966   15224 command_runner.go:130] ! I1014 15:22:42.126435       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I1014 08:47:31.715966   15224 command_runner.go:130] ! I1014 15:22:42.126461       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I1014 08:47:31.715966   15224 command_runner.go:130] ! I1014 15:22:42.126498       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I1014 08:47:31.715966   15224 command_runner.go:130] ! I1014 15:22:42.126514       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I1014 08:47:31.715966   15224 command_runner.go:130] ! I1014 15:22:42.126546       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I1014 08:47:31.715966   15224 command_runner.go:130] ! I1014 15:22:42.126572       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I1014 08:47:31.715966   15224 command_runner.go:130] ! I1014 15:22:42.126591       1 controllermanager.go:797] "Started controller" controller="resourcequota-controller"
	I1014 08:47:31.715966   15224 command_runner.go:130] ! I1014 15:22:42.127139       1 resource_quota_controller.go:300] "Starting resource quota controller" logger="resourcequota-controller"
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:42.127191       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:42.127239       1 resource_quota_monitor.go:308] "QuotaMonitor running" logger="resourcequota-controller"
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:42.377410       1 controllermanager.go:797] "Started controller" controller="namespace-controller"
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:42.378109       1 namespace_controller.go:202] "Starting namespace controller" logger="namespace-controller"
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:42.378533       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:42.520088       1 controllermanager.go:797] "Started controller" controller="job-controller"
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:42.520194       1 job_controller.go:226] "Starting job controller" logger="job-controller"
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:42.520661       1 shared_informer.go:313] Waiting for caches to sync for job
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:42.669141       1 controllermanager.go:797] "Started controller" controller="ttl-controller"
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:42.669227       1 ttl_controller.go:127] "Starting TTL controller" logger="ttl-controller"
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:42.669239       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:42.713738       1 node_lifecycle_controller.go:430] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:42.713795       1 controllermanager.go:797] "Started controller" controller="node-lifecycle-controller"
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:42.713972       1 node_lifecycle_controller.go:464] "Sending events to api server" logger="node-lifecycle-controller"
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:42.714019       1 node_lifecycle_controller.go:475] "Starting node controller" logger="node-lifecycle-controller"
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:42.714028       1 shared_informer.go:313] Waiting for caches to sync for taint
	I1014 08:47:31.716999   15224 command_runner.go:130] ! E1014 15:22:42.870353       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:42.870400       1 controllermanager.go:775] "Warning: skipping controller" controller="service-lb-controller"
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:43.022018       1 controllermanager.go:797] "Started controller" controller="persistentvolume-attach-detach-controller"
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:43.022670       1 attach_detach_controller.go:338] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:43.022756       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:43.169053       1 controllermanager.go:797] "Started controller" controller="ephemeral-volume-controller"
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:43.169165       1 controller.go:173] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:43.169572       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:43.319453       1 controllermanager.go:797] "Started controller" controller="endpoints-controller"
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:43.319620       1 endpoints_controller.go:182] "Starting endpoint controller" logger="endpoints-controller"
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:43.319648       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:43.471065       1 controllermanager.go:797] "Started controller" controller="deployment-controller"
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:43.471807       1 deployment_controller.go:173] "Starting controller" logger="deployment-controller" controller="deployment"
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:43.472102       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:43.621382       1 controllermanager.go:797] "Started controller" controller="cronjob-controller"
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:43.621522       1 cronjob_controllerv2.go:145] "Starting cronjob controller v2" logger="cronjob-controller"
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:43.621537       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:43.663267       1 controllermanager.go:797] "Started controller" controller="certificatesigningrequest-approving-controller"
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:43.663415       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:43.663427       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:43.822946       1 controllermanager.go:797] "Started controller" controller="bootstrap-signer-controller"
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:43.822992       1 controllermanager.go:775] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:43.823061       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:43.863507       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:43.863638       1 controllermanager.go:797] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:43.863659       1 controllermanager.go:749] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:43.902554       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:43.913563       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:43.916687       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-671000\" does not exist"
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:43.921355       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:43.921578       1 shared_informer.go:320] Caches are synced for cronjob
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:43.921709       1 shared_informer.go:320] Caches are synced for job
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:43.921822       1 shared_informer.go:320] Caches are synced for PV protection
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:43.922806       1 shared_informer.go:320] Caches are synced for PVC protection
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:43.922814       1 shared_informer.go:320] Caches are synced for TTL after finished
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:43.924127       1 shared_informer.go:320] Caches are synced for persistent volume
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:43.924751       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:43.925596       1 shared_informer.go:320] Caches are synced for node
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:43.925653       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I1014 08:47:31.716999   15224 command_runner.go:130] ! I1014 15:22:43.925863       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:43.925961       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:43.925971       1 shared_informer.go:320] Caches are synced for cidrallocator
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:43.927918       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:43.933656       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:43.935993       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:43.939827       1 shared_informer.go:320] Caches are synced for GC
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:43.945652       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-671000" podCIDRs=["10.244.0.0/24"]
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:43.945733       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000"
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:43.946434       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000"
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:43.958217       1 shared_informer.go:320] Caches are synced for service account
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:43.964566       1 shared_informer.go:320] Caches are synced for HPA
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:43.970909       1 shared_informer.go:320] Caches are synced for TTL
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:43.971119       1 shared_informer.go:320] Caches are synced for ephemeral
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:43.971337       1 shared_informer.go:320] Caches are synced for expand
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:43.975501       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:43.976796       1 shared_informer.go:320] Caches are synced for stateful set
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:43.978344       1 shared_informer.go:320] Caches are synced for crt configmap
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:43.978435       1 shared_informer.go:320] Caches are synced for daemon sets
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:43.980084       1 shared_informer.go:320] Caches are synced for namespace
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:44.014728       1 shared_informer.go:320] Caches are synced for taint
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:44.015046       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:44.015932       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-671000"
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:44.016156       1 node_lifecycle_controller.go:1036] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:44.020094       1 shared_informer.go:320] Caches are synced for endpoint
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:44.020640       1 shared_informer.go:320] Caches are synced for ReplicationController
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:44.071958       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:44.103447       1 shared_informer.go:320] Caches are synced for resource quota
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:44.118642       1 shared_informer.go:320] Caches are synced for disruption
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:44.123565       1 shared_informer.go:320] Caches are synced for attach detach
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:44.124082       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:44.128052       1 shared_informer.go:320] Caches are synced for resource quota
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:44.164601       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:44.170410       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:44.172085       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:44.172168       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:44.172762       1 shared_informer.go:320] Caches are synced for deployment
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:44.173998       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:44.583260       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000"
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:44.634360       1 shared_informer.go:320] Caches are synced for garbage collector
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:44.669630       1 shared_informer.go:320] Caches are synced for garbage collector
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:44.669841       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:45.450540       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="308.738304ms"
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:45.524372       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="73.173482ms"
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:45.524478       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="65.397µs"
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:46.000395       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="74.724912ms"
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:46.017930       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="16.329807ms"
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:22:46.018255       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="275.988µs"
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:23:06.558708       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000"
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:23:06.579629       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000"
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:23:06.601705       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="101.399µs"
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:23:06.643522       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="58.099µs"
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:23:08.868021       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="148.904µs"
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:23:08.936155       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="33.695698ms"
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:23:08.939220       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="3.012072ms"
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:23:09.023157       1 node_lifecycle_controller.go:1055] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:23:10.921399       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000"
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:25:49.920125       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-671000-m02\" does not exist"
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:25:49.955308       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-671000-m02" podCIDRs=["10.244.1.0/24"]
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:25:49.956041       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:25:49.956493       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:25:50.332394       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:25:50.885049       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:25:54.059204       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-671000-m02"
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:25:54.342262       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:26:00.157293       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:26:18.720546       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:26:18.720611       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-671000-m02"
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:26:18.738467       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:26:19.084143       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:31.717954   15224 command_runner.go:130] ! I1014 15:26:20.411603       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:26:44.435156       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="75.721873ms"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:26:44.496244       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="60.852418ms"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:26:44.496945       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="131.501µs"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:26:44.540742       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="117.6µs"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:26:47.680512       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="13.465591ms"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:26:47.680616       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="27.8µs"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:26:47.878633       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="13.308091ms"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:26:47.878779       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="30.7µs"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:26:50.724728       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:27:15.823577       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:30:35.115559       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-671000-m02"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:30:35.116078       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-671000-m03\" does not exist"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:30:35.128392       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-671000-m03" podCIDRs=["10.244.2.0/24"]
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:30:35.128677       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:30:35.128924       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:30:35.152829       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:30:35.373296       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:30:35.920577       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:30:39.132287       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-671000-m03"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:30:39.151825       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:30:45.490553       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:31:04.306000       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:31:04.306453       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-671000-m03"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:31:04.323636       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:31:05.841789       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:31:09.153752       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:31:56.911043       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:32:21.316935       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:36:11.719246       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:37:02.446841       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:37:26.676097       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:38:59.261991       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-671000-m02"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:38:59.262728       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:38:59.286871       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:39:04.424423       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:41:24.025444       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:41:24.063975       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:41:29.184402       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:41:29.185577       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-671000-m02"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:41:34.952323       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-671000-m03\" does not exist"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:41:34.952330       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-671000-m02"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:41:34.966125       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-671000-m03" podCIDRs=["10.244.3.0/24"]
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:41:34.966148       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:41:34.966505       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:41:34.987165       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:41:35.003234       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:41:35.540526       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:41:39.448073       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:41:45.343875       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:41:53.719761       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:41:53.720945       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-671000-m02"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:41:53.741507       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:41:54.369330       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:42:08.557249       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:42:32.770970       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:43:29.631595       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-671000-m02"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:43:29.632207       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:43:29.853526       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:43:35.163131       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:43:45.119758       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:43:45.151031       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:43:45.251625       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="18.269341ms"
	I1014 08:47:31.718955   15224 command_runner.go:130] ! I1014 15:43:45.252472       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="121.1µs"
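The repeated shared_informer.go "Waiting for caches to sync" / "Caches are synced" pairs above come from client-go's shared-informer startup: each controller blocks until its informer's initial list has populated the local cache before it begins reconciling. The following is a minimal sketch of that pattern, not code from this run; it assumes a reachable cluster via the default kubeconfig, and the resync interval is illustrative.

// Hedged sketch of the client-go shared-informer startup pattern behind the
// "Waiting for caches to sync" / "Caches are synced" log lines above.
// Assumes a cluster reachable through ~/.kube/config; resync is illustrative.
package main

import (
	"fmt"
	"path/filepath"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	stop := make(chan struct{})
	defer close(stop)

	factory := informers.NewSharedInformerFactory(client, 10*time.Minute)
	nodeInformer := factory.Core().V1().Nodes().Informer()

	// Start the reflector list/watch loops, then block until the initial
	// list has populated the local cache, mirroring the log lines above.
	factory.Start(stop)
	if !cache.WaitForCacheSync(stop, nodeInformer.HasSynced) {
		panic("caches did not sync")
	}
	fmt.Println("caches are synced")
}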
	I1014 08:47:31.742011   15224 logs.go:123] Gathering logs for kindnet [bba035362eb9] ...
	I1014 08:47:31.742011   15224 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bba035362eb9"
	I1014 08:47:31.772870   15224 command_runner.go:130] ! I1014 15:46:18.000845       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1014 08:47:31.772870   15224 command_runner.go:130] ! I1014 15:46:18.015386       1 main.go:139] hostIP = 172.20.106.123
	I1014 08:47:31.772870   15224 command_runner.go:130] ! podIP = 172.20.106.123
	I1014 08:47:31.772870   15224 command_runner.go:130] ! I1014 15:46:18.015613       1 main.go:148] setting mtu 1500 for CNI 
	I1014 08:47:31.772870   15224 command_runner.go:130] ! I1014 15:46:18.015630       1 main.go:178] kindnetd IP family: "ipv4"
	I1014 08:47:31.772870   15224 command_runner.go:130] ! I1014 15:46:18.015641       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I1014 08:47:31.772870   15224 command_runner.go:130] ! I1014 15:46:18.919987       1 main.go:238] Error creating network policy controller: could not run nftables command: /dev/stdin:1:1-37: Error: Could not process rule: Operation not supported
	I1014 08:47:31.772870   15224 command_runner.go:130] ! add table inet kube-network-policies
	I1014 08:47:31.772870   15224 command_runner.go:130] ! ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
	I1014 08:47:31.772870   15224 command_runner.go:130] ! , skipping network policies
	I1014 08:47:31.772870   15224 command_runner.go:130] ! W1014 15:46:48.934772       1 reflector.go:561] pkg/mod/k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I1014 08:47:31.772870   15224 command_runner.go:130] ! E1014 15:46:48.935157       1 reflector.go:158] "Unhandled Error" err="pkg/mod/k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError"
	I1014 08:47:31.772870   15224 command_runner.go:130] ! I1014 15:46:58.925780       1 main.go:296] Handling node with IPs: map[172.20.106.123:{}]
	I1014 08:47:31.772870   15224 command_runner.go:130] ! I1014 15:46:58.926393       1 main.go:300] handling current node
	I1014 08:47:31.772870   15224 command_runner.go:130] ! I1014 15:46:58.927562       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:31.772870   15224 command_runner.go:130] ! I1014 15:46:58.927665       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:31.772870   15224 command_runner.go:130] ! I1014 15:46:58.928645       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.20.109.137 Flags: [] Table: 0 Realm: 0} 
	I1014 08:47:31.772870   15224 command_runner.go:130] ! I1014 15:46:58.929412       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:31.772870   15224 command_runner.go:130] ! I1014 15:46:58.929466       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:31.772870   15224 command_runner.go:130] ! I1014 15:46:58.929555       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.20.102.29 Flags: [] Table: 0 Realm: 0} 
	I1014 08:47:31.772870   15224 command_runner.go:130] ! I1014 15:47:08.930440       1 main.go:296] Handling node with IPs: map[172.20.106.123:{}]
	I1014 08:47:31.772870   15224 command_runner.go:130] ! I1014 15:47:08.930586       1 main.go:300] handling current node
	I1014 08:47:31.772870   15224 command_runner.go:130] ! I1014 15:47:08.930648       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:31.772870   15224 command_runner.go:130] ! I1014 15:47:08.930739       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:31.772870   15224 command_runner.go:130] ! I1014 15:47:08.931080       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:31.772870   15224 command_runner.go:130] ! I1014 15:47:08.931268       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:31.772870   15224 command_runner.go:130] ! I1014 15:47:18.921538       1 main.go:296] Handling node with IPs: map[172.20.106.123:{}]
	I1014 08:47:31.772870   15224 command_runner.go:130] ! I1014 15:47:18.921639       1 main.go:300] handling current node
	I1014 08:47:31.773398   15224 command_runner.go:130] ! I1014 15:47:18.921689       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:31.773398   15224 command_runner.go:130] ! I1014 15:47:18.921698       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:31.773398   15224 command_runner.go:130] ! I1014 15:47:18.922117       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:31.773462   15224 command_runner.go:130] ! I1014 15:47:18.922190       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:31.773462   15224 command_runner.go:130] ! I1014 15:47:28.925595       1 main.go:296] Handling node with IPs: map[172.20.106.123:{}]
	I1014 08:47:31.773462   15224 command_runner.go:130] ! I1014 15:47:28.925732       1 main.go:300] handling current node
	I1014 08:47:31.773462   15224 command_runner.go:130] ! I1014 15:47:28.925759       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:31.773462   15224 command_runner.go:130] ! I1014 15:47:28.925767       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:31.773542   15224 command_runner.go:130] ! I1014 15:47:28.926918       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:31.773569   15224 command_runner.go:130] ! I1014 15:47:28.927018       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
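The post-mortem collector gathers the kindnet output above by running docker logs --tail 400 <container> inside the VM (logs.go:123 / ssh_runner.go:195). Below is a minimal sketch of the same technique under the assumption of a local docker CLI rather than minikube's SSH runner; the container ID is copied from the log above and is specific to this run.

// Hedged sketch: tails the last N lines of a container's logs, as the
// harness does above. Runs docker locally; the real harness executes the
// command inside the VM over SSH.
package main

import (
	"fmt"
	"os/exec"
)

func tailContainerLogs(containerID string, lines int) (string, error) {
	cmd := exec.Command("docker", "logs", "--tail", fmt.Sprint(lines), containerID)
	out, err := cmd.CombinedOutput() // docker logs writes to both stdout and stderr
	return string(out), err
}

func main() {
	out, err := tailContainerLogs("bba035362eb9", 400) // ID taken from the log above
	if err != nil {
		fmt.Println("error:", err)
	}
	fmt.Print(out)
}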
	I1014 08:47:31.774433   15224 logs.go:123] Gathering logs for Docker ...
	I1014 08:47:31.774433   15224 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1014 08:47:31.809451   15224 command_runner.go:130] > Oct 14 15:44:44 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1014 08:47:31.809451   15224 command_runner.go:130] > Oct 14 15:44:44 minikube cri-dockerd[221]: time="2024-10-14T15:44:44Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I1014 08:47:31.809451   15224 command_runner.go:130] > Oct 14 15:44:44 minikube cri-dockerd[221]: time="2024-10-14T15:44:44Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1014 08:47:31.809451   15224 command_runner.go:130] > Oct 14 15:44:44 minikube cri-dockerd[221]: time="2024-10-14T15:44:44Z" level=info msg="Start docker client with request timeout 0s"
	I1014 08:47:31.809451   15224 command_runner.go:130] > Oct 14 15:44:44 minikube cri-dockerd[221]: time="2024-10-14T15:44:44Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1014 08:47:31.809451   15224 command_runner.go:130] > Oct 14 15:44:45 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I1014 08:47:31.809451   15224 command_runner.go:130] > Oct 14 15:44:45 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I1014 08:47:31.809451   15224 command_runner.go:130] > Oct 14 15:44:45 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I1014 08:47:31.809451   15224 command_runner.go:130] > Oct 14 15:44:47 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I1014 08:47:31.809451   15224 command_runner.go:130] > Oct 14 15:44:47 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I1014 08:47:31.809451   15224 command_runner.go:130] > Oct 14 15:44:47 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1014 08:47:31.809451   15224 command_runner.go:130] > Oct 14 15:44:47 minikube cri-dockerd[413]: time="2024-10-14T15:44:47Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I1014 08:47:31.809451   15224 command_runner.go:130] > Oct 14 15:44:47 minikube cri-dockerd[413]: time="2024-10-14T15:44:47Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1014 08:47:31.809451   15224 command_runner.go:130] > Oct 14 15:44:47 minikube cri-dockerd[413]: time="2024-10-14T15:44:47Z" level=info msg="Start docker client with request timeout 0s"
	I1014 08:47:31.809451   15224 command_runner.go:130] > Oct 14 15:44:47 minikube cri-dockerd[413]: time="2024-10-14T15:44:47Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1014 08:47:31.809451   15224 command_runner.go:130] > Oct 14 15:44:47 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I1014 08:47:31.809451   15224 command_runner.go:130] > Oct 14 15:44:47 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I1014 08:47:31.809451   15224 command_runner.go:130] > Oct 14 15:44:47 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I1014 08:47:31.809451   15224 command_runner.go:130] > Oct 14 15:44:49 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I1014 08:47:31.809451   15224 command_runner.go:130] > Oct 14 15:44:49 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I1014 08:47:31.809451   15224 command_runner.go:130] > Oct 14 15:44:49 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1014 08:47:31.809451   15224 command_runner.go:130] > Oct 14 15:44:50 minikube cri-dockerd[421]: time="2024-10-14T15:44:50Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I1014 08:47:31.809451   15224 command_runner.go:130] > Oct 14 15:44:50 minikube cri-dockerd[421]: time="2024-10-14T15:44:50Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1014 08:47:31.809451   15224 command_runner.go:130] > Oct 14 15:44:50 minikube cri-dockerd[421]: time="2024-10-14T15:44:50Z" level=info msg="Start docker client with request timeout 0s"
	I1014 08:47:31.809451   15224 command_runner.go:130] > Oct 14 15:44:50 minikube cri-dockerd[421]: time="2024-10-14T15:44:50Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I1014 08:47:31.809451   15224 command_runner.go:130] > Oct 14 15:44:50 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I1014 08:47:31.809451   15224 command_runner.go:130] > Oct 14 15:44:50 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I1014 08:47:31.809451   15224 command_runner.go:130] > Oct 14 15:44:50 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I1014 08:47:31.809451   15224 command_runner.go:130] > Oct 14 15:44:52 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I1014 08:47:31.809451   15224 command_runner.go:130] > Oct 14 15:44:52 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I1014 08:47:31.809451   15224 command_runner.go:130] > Oct 14 15:44:52 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I1014 08:47:31.809451   15224 command_runner.go:130] > Oct 14 15:44:52 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I1014 08:47:31.809451   15224 command_runner.go:130] > Oct 14 15:44:52 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I1014 08:47:31.809451   15224 command_runner.go:130] > Oct 14 15:45:33 multinode-671000 systemd[1]: Starting Docker Application Container Engine...
	I1014 08:47:31.809451   15224 command_runner.go:130] > Oct 14 15:45:33 multinode-671000 dockerd[649]: time="2024-10-14T15:45:33.956984837Z" level=info msg="Starting up"
	I1014 08:47:31.809451   15224 command_runner.go:130] > Oct 14 15:45:33 multinode-671000 dockerd[649]: time="2024-10-14T15:45:33.957924243Z" level=info msg="containerd not running, starting managed containerd"
	I1014 08:47:31.809451   15224 command_runner.go:130] > Oct 14 15:45:33 multinode-671000 dockerd[649]: time="2024-10-14T15:45:33.959335951Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=655
	I1014 08:47:31.809451   15224 command_runner.go:130] > Oct 14 15:45:33 multinode-671000 dockerd[655]: time="2024-10-14T15:45:33.994773864Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	I1014 08:47:31.809451   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.020772213Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I1014 08:47:31.809451   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.021015015Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I1014 08:47:31.810430   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.021095615Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I1014 08:47:31.810430   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.021147816Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I1014 08:47:31.810430   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.021828519Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I1014 08:47:31.810430   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.021976120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I1014 08:47:31.810430   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.022248222Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I1014 08:47:31.810430   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.022376622Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I1014 08:47:31.810430   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.022401523Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I1014 08:47:31.810430   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.022414623Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I1014 08:47:31.810430   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.023030126Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I1014 08:47:31.810430   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.023715230Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I1014 08:47:31.810430   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.027058949Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I1014 08:47:31.810430   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.027212250Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I1014 08:47:31.810430   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.027346050Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I1014 08:47:31.810430   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.027434351Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I1014 08:47:31.810430   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.028070055Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I1014 08:47:31.810430   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.028254556Z" level=info msg="metadata content store policy set" policy=shared
	I1014 08:47:31.810430   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.033722086Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I1014 08:47:31.810430   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.033900187Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I1014 08:47:31.810430   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.033927888Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I1014 08:47:31.810430   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.033944088Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I1014 08:47:31.810430   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.033959488Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I1014 08:47:31.810430   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.034029088Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I1014 08:47:31.810430   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.034638992Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I1014 08:47:31.810430   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.034898493Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I1014 08:47:31.810430   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.034993394Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I1014 08:47:31.810430   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035025394Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I1014 08:47:31.810430   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035042394Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I1014 08:47:31.810430   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035056394Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I1014 08:47:31.810430   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035070894Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I1014 08:47:31.810430   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035091294Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I1014 08:47:31.810430   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035125794Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I1014 08:47:31.810430   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035139394Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I1014 08:47:31.810430   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035152195Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I1014 08:47:31.810430   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035200795Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I1014 08:47:31.810430   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035227495Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I1014 08:47:31.810430   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035242395Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I1014 08:47:31.810430   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035255095Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I1014 08:47:31.810430   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035268595Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I1014 08:47:31.810430   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035283595Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I1014 08:47:31.810430   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035296895Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I1014 08:47:31.810430   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035314495Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I1014 08:47:31.810430   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035330096Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I1014 08:47:31.810430   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035343596Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I1014 08:47:31.811434   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035364096Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I1014 08:47:31.811434   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035376796Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I1014 08:47:31.811434   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035388896Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I1014 08:47:31.811434   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035401196Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I1014 08:47:31.811434   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035419096Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I1014 08:47:31.811434   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035441896Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I1014 08:47:31.811434   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035454496Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I1014 08:47:31.811434   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035465896Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I1014 08:47:31.811434   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035512897Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I1014 08:47:31.811434   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035554297Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I1014 08:47:31.811434   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035568497Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I1014 08:47:31.811434   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035580597Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I1014 08:47:31.811434   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035590797Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I1014 08:47:31.811434   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035604297Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I1014 08:47:31.811434   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035619397Z" level=info msg="NRI interface is disabled by configuration."
	I1014 08:47:31.811434   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.035934999Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I1014 08:47:31.811434   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.036229901Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I1014 08:47:31.811434   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.036295501Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I1014 08:47:31.811434   15224 command_runner.go:130] > Oct 14 15:45:34 multinode-671000 dockerd[655]: time="2024-10-14T15:45:34.036322201Z" level=info msg="containerd successfully booted in 0.043787s"
	I1014 08:47:31.811434   15224 command_runner.go:130] > Oct 14 15:45:35 multinode-671000 dockerd[649]: time="2024-10-14T15:45:35.016752326Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1014 08:47:31.811434   15224 command_runner.go:130] > Oct 14 15:45:35 multinode-671000 dockerd[649]: time="2024-10-14T15:45:35.204043816Z" level=info msg="Loading containers: start."
	I1014 08:47:31.811434   15224 command_runner.go:130] > Oct 14 15:45:35 multinode-671000 dockerd[649]: time="2024-10-14T15:45:35.545951324Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1014 08:47:31.811434   15224 command_runner.go:130] > Oct 14 15:45:35 multinode-671000 dockerd[649]: time="2024-10-14T15:45:35.688138626Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I1014 08:47:31.811434   15224 command_runner.go:130] > Oct 14 15:45:35 multinode-671000 dockerd[649]: time="2024-10-14T15:45:35.780023455Z" level=info msg="Loading containers: done."
	I1014 08:47:31.811434   15224 command_runner.go:130] > Oct 14 15:45:35 multinode-671000 dockerd[649]: time="2024-10-14T15:45:35.809569125Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	I1014 08:47:31.811434   15224 command_runner.go:130] > Oct 14 15:45:35 multinode-671000 dockerd[649]: time="2024-10-14T15:45:35.809610125Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	I1014 08:47:31.811434   15224 command_runner.go:130] > Oct 14 15:45:35 multinode-671000 dockerd[649]: time="2024-10-14T15:45:35.809633825Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
	I1014 08:47:31.811434   15224 command_runner.go:130] > Oct 14 15:45:35 multinode-671000 dockerd[649]: time="2024-10-14T15:45:35.810490930Z" level=info msg="Daemon has completed initialization"
	I1014 08:47:31.811434   15224 command_runner.go:130] > Oct 14 15:45:35 multinode-671000 dockerd[649]: time="2024-10-14T15:45:35.853736479Z" level=info msg="API listen on /var/run/docker.sock"
	I1014 08:47:31.811434   15224 command_runner.go:130] > Oct 14 15:45:35 multinode-671000 dockerd[649]: time="2024-10-14T15:45:35.854139881Z" level=info msg="API listen on [::]:2376"
	I1014 08:47:31.811434   15224 command_runner.go:130] > Oct 14 15:45:35 multinode-671000 systemd[1]: Started Docker Application Container Engine.
	I1014 08:47:31.811434   15224 command_runner.go:130] > Oct 14 15:46:01 multinode-671000 systemd[1]: Stopping Docker Application Container Engine...
	I1014 08:47:31.811434   15224 command_runner.go:130] > Oct 14 15:46:01 multinode-671000 dockerd[649]: time="2024-10-14T15:46:01.049459779Z" level=info msg="Processing signal 'terminated'"
	I1014 08:47:31.811434   15224 command_runner.go:130] > Oct 14 15:46:01 multinode-671000 dockerd[649]: time="2024-10-14T15:46:01.053392981Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I1014 08:47:31.811434   15224 command_runner.go:130] > Oct 14 15:46:01 multinode-671000 dockerd[649]: time="2024-10-14T15:46:01.053568081Z" level=info msg="Daemon shutdown complete"
	I1014 08:47:31.811434   15224 command_runner.go:130] > Oct 14 15:46:01 multinode-671000 dockerd[649]: time="2024-10-14T15:46:01.053889681Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I1014 08:47:31.812439   15224 command_runner.go:130] > Oct 14 15:46:01 multinode-671000 dockerd[649]: time="2024-10-14T15:46:01.054172781Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I1014 08:47:31.812439   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 systemd[1]: docker.service: Deactivated successfully.
	I1014 08:47:31.812439   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 systemd[1]: Stopped Docker Application Container Engine.
	I1014 08:47:31.812439   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 systemd[1]: Starting Docker Application Container Engine...
	I1014 08:47:31.812439   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1087]: time="2024-10-14T15:46:02.109177376Z" level=info msg="Starting up"
	I1014 08:47:31.812439   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1087]: time="2024-10-14T15:46:02.110667577Z" level=info msg="containerd not running, starting managed containerd"
	I1014 08:47:31.812439   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1087]: time="2024-10-14T15:46:02.112008177Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1093
	I1014 08:47:31.812439   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.143199292Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	I1014 08:47:31.812439   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.168149004Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I1014 08:47:31.812439   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.168197704Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I1014 08:47:31.812439   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.168231304Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I1014 08:47:31.812439   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.168244704Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I1014 08:47:31.812439   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.168266504Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I1014 08:47:31.812439   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.168317904Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I1014 08:47:31.812439   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.168445004Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I1014 08:47:31.812439   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.168531404Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I1014 08:47:31.812439   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.168550204Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I1014 08:47:31.812439   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.168561104Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I1014 08:47:31.812439   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.168583904Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I1014 08:47:31.812439   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.168690904Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I1014 08:47:31.812439   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.171907506Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I1014 08:47:31.812439   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.172002906Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I1014 08:47:31.812439   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.172175606Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I1014 08:47:31.812439   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.172377606Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I1014 08:47:31.812439   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.172424606Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I1014 08:47:31.812439   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.172461506Z" level=info msg="metadata content store policy set" policy=shared
	I1014 08:47:31.812439   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.172795106Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I1014 08:47:31.812439   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.172882406Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I1014 08:47:31.812439   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.172902406Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I1014 08:47:31.812439   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.172916306Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I1014 08:47:31.812439   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.172930506Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I1014 08:47:31.812439   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.172992206Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I1014 08:47:31.812439   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173380806Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I1014 08:47:31.812439   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173626906Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I1014 08:47:31.812439   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173734806Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I1014 08:47:31.812439   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173758306Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I1014 08:47:31.812439   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173794906Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I1014 08:47:31.812439   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173849506Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I1014 08:47:31.812439   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173864606Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I1014 08:47:31.812439   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173878206Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I1014 08:47:31.813437   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173900507Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I1014 08:47:31.813437   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173916207Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I1014 08:47:31.813437   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173928607Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I1014 08:47:31.813437   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173940507Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I1014 08:47:31.813437   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173959407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I1014 08:47:31.813437   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173973007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I1014 08:47:31.813437   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173985207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I1014 08:47:31.813437   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.173998307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I1014 08:47:31.813437   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174010307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I1014 08:47:31.813437   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174023407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I1014 08:47:31.813437   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174035407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I1014 08:47:31.813437   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174047207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I1014 08:47:31.813437   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174077107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I1014 08:47:31.813437   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174095807Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I1014 08:47:31.813437   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174107607Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I1014 08:47:31.813437   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174191507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I1014 08:47:31.813437   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174206607Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I1014 08:47:31.813437   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174229207Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I1014 08:47:31.813437   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174259307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I1014 08:47:31.813437   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174352207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I1014 08:47:31.813437   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174370407Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I1014 08:47:31.813437   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174499607Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I1014 08:47:31.813437   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174541907Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I1014 08:47:31.813437   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174556007Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I1014 08:47:31.813437   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174568207Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I1014 08:47:31.813437   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174578207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I1014 08:47:31.813437   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174598407Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I1014 08:47:31.813437   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174612107Z" level=info msg="NRI interface is disabled by configuration."
	I1014 08:47:31.813437   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.174893107Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I1014 08:47:31.813437   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.175192307Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I1014 08:47:31.813437   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.175271607Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I1014 08:47:31.813437   15224 command_runner.go:130] > Oct 14 15:46:02 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:02.175364007Z" level=info msg="containerd successfully booted in 0.032943s"
	I1014 08:47:31.813437   15224 command_runner.go:130] > Oct 14 15:46:03 multinode-671000 dockerd[1087]: time="2024-10-14T15:46:03.157176768Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I1014 08:47:31.813437   15224 command_runner.go:130] > Oct 14 15:46:03 multinode-671000 dockerd[1087]: time="2024-10-14T15:46:03.188626383Z" level=info msg="Loading containers: start."
	I1014 08:47:31.813437   15224 command_runner.go:130] > Oct 14 15:46:03 multinode-671000 dockerd[1087]: time="2024-10-14T15:46:03.419822091Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I1014 08:47:31.813437   15224 command_runner.go:130] > Oct 14 15:46:03 multinode-671000 dockerd[1087]: time="2024-10-14T15:46:03.533275144Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I1014 08:47:31.813437   15224 command_runner.go:130] > Oct 14 15:46:03 multinode-671000 dockerd[1087]: time="2024-10-14T15:46:03.631380390Z" level=info msg="Loading containers: done."
	I1014 08:47:31.813437   15224 command_runner.go:130] > Oct 14 15:46:03 multinode-671000 dockerd[1087]: time="2024-10-14T15:46:03.656005002Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
	I1014 08:47:31.813437   15224 command_runner.go:130] > Oct 14 15:46:03 multinode-671000 dockerd[1087]: time="2024-10-14T15:46:03.656245502Z" level=info msg="Daemon has completed initialization"
	I1014 08:47:31.813437   15224 command_runner.go:130] > Oct 14 15:46:03 multinode-671000 dockerd[1087]: time="2024-10-14T15:46:03.695426820Z" level=info msg="API listen on /var/run/docker.sock"
	I1014 08:47:31.813437   15224 command_runner.go:130] > Oct 14 15:46:03 multinode-671000 systemd[1]: Started Docker Application Container Engine.
	I1014 08:47:31.813437   15224 command_runner.go:130] > Oct 14 15:46:03 multinode-671000 dockerd[1087]: time="2024-10-14T15:46:03.695638120Z" level=info msg="API listen on [::]:2376"
	I1014 08:47:31.813437   15224 command_runner.go:130] > Oct 14 15:46:04 multinode-671000 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I1014 08:47:31.814446   15224 command_runner.go:130] > Oct 14 15:46:04 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:04Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I1014 08:47:31.814446   15224 command_runner.go:130] > Oct 14 15:46:04 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:04Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I1014 08:47:31.814446   15224 command_runner.go:130] > Oct 14 15:46:04 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:04Z" level=info msg="Start docker client with request timeout 0s"
	I1014 08:47:31.814446   15224 command_runner.go:130] > Oct 14 15:46:04 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:04Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I1014 08:47:31.814446   15224 command_runner.go:130] > Oct 14 15:46:04 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:04Z" level=info msg="Loaded network plugin cni"
	I1014 08:47:31.814446   15224 command_runner.go:130] > Oct 14 15:46:04 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:04Z" level=info msg="Docker cri networking managed by network plugin cni"
	I1014 08:47:31.814446   15224 command_runner.go:130] > Oct 14 15:46:04 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:04Z" level=info msg="Setting cgroupDriver cgroupfs"
	I1014 08:47:31.814446   15224 command_runner.go:130] > Oct 14 15:46:04 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:04Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I1014 08:47:31.814446   15224 command_runner.go:130] > Oct 14 15:46:04 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:04Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I1014 08:47:31.814446   15224 command_runner.go:130] > Oct 14 15:46:04 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:04Z" level=info msg="Start cri-dockerd grpc backend"
	I1014 08:47:31.814446   15224 command_runner.go:130] > Oct 14 15:46:04 multinode-671000 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I1014 08:47:31.814446   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:09Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7c65d6cfc9-fs9ct_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"2f8cc9a218fef4446839b0253827fc2dd89f8eccbd66ab7f0e5123c033f0aacc\""
	I1014 08:47:31.814446   15224 command_runner.go:130] > Oct 14 15:46:09 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:09Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-7dff88458-vlp7j_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"06e529266db4b2cf3ad09285cddeb89d7d11e858b5156cf73f41198c9500b9f2\""
	I1014 08:47:31.814446   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.635579177Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:31.814446   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.635817077Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:31.814446   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.635919877Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:31.814446   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.636083677Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:31.814446   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.762883836Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:31.814446   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.763092036Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:31.814446   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.763114536Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:31.814446   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.765440937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:31.814446   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:10Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1d3033f871fb11cb3095bcf5c5d43615de9685372a45edf226fe52b2f482bc71/resolv.conf as [nameserver 172.20.96.1]"
	I1014 08:47:31.814446   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.846488476Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:31.814446   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.847106376Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:31.814446   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.847254676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:31.814446   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.854373579Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:31.814446   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.883112593Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:31.814446   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.884477393Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:31.814446   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.884514293Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:31.814446   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:10.884605993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:31.814446   15224 command_runner.go:130] > Oct 14 15:46:10 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:10Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6155e8be2d5d725a4259a45fe10f7ceb3fc581d528a6486633b563a59f331127/resolv.conf as [nameserver 172.20.96.1]"
	I1014 08:47:31.814446   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:11Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7bd4c36606eefc91e9ae07ea5683536fc78fdb6f7f752f44d28787b88540a878/resolv.conf as [nameserver 172.20.96.1]"
	I1014 08:47:31.814446   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.061102976Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:31.814446   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.061201476Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:31.814446   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.061221876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:31.814446   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.061393176Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:31.814446   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:11Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0697a11790e806fa4d679e3d97be4c7193692d1cb8f76d882cf3a75aa8e0c238/resolv.conf as [nameserver 172.20.96.1]"
	I1014 08:47:31.814446   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.312465294Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:31.814446   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.312610794Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:31.815485   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.312646494Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:31.815485   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.312762794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:31.815485   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.422697746Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:31.815485   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.422797746Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:31.815485   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.422816346Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:31.815485   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.423001046Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:31.815485   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.500801282Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:31.815485   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.501016583Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:31.815485   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.501037383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:31.815485   15224 command_runner.go:130] > Oct 14 15:46:11 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:11.504117984Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:31.815485   15224 command_runner.go:130] > Oct 14 15:46:15 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:15Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I1014 08:47:31.815485   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.472267615Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:31.815485   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.472571215Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:31.815485   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.472597215Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:31.815485   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.472873315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:31.815485   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.475833517Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:31.815485   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.476013017Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:31.815485   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.476107917Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:31.815485   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.476358717Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:31.815485   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.515050835Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:31.815485   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.515249635Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:31.815485   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.515393835Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:31.815485   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.515565835Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:31.815485   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:16Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6f8bdf552734e59df94d489a081585cdaeeaee729f0a0ae92105117d4d744f3f/resolv.conf as [nameserver 172.20.96.1]"
	I1014 08:47:31.815485   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:16Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/cdcdd532ba1369fe0e866b321f18e0bd99a88a2699c325eea17a070d48f2f024/resolv.conf as [nameserver 172.20.96.1]"
	I1014 08:47:31.815485   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.911588321Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:31.815485   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.913177522Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:31.815485   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.913368522Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:31.815485   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:16.914060722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:31.815485   15224 command_runner.go:130] > Oct 14 15:46:16 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:46:16Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7bcadf1f0885ff3411b477a64c6af366c77eb264ce7a761f174b079b11e2e703/resolv.conf as [nameserver 172.20.96.1]"
	I1014 08:47:31.815485   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:17.063841193Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:31.815485   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:17.063929693Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:31.815485   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:17.063946093Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:31.815485   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:17.064242693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:31.815485   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:17.206735160Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:31.815485   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:17.207544260Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:31.815485   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:17.208633061Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:31.816501   15224 command_runner.go:130] > Oct 14 15:46:17 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:17.224429668Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:31.816501   15224 command_runner.go:130] > Oct 14 15:46:47 multinode-671000 dockerd[1087]: time="2024-10-14T15:46:47.556424473Z" level=info msg="ignoring event" container=c76c2585681071ddc096fbc042a644b58d0b7d5d604107c6906a076c7aa1cca6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I1014 08:47:31.816501   15224 command_runner.go:130] > Oct 14 15:46:47 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:47.559859508Z" level=info msg="shim disconnected" id=c76c2585681071ddc096fbc042a644b58d0b7d5d604107c6906a076c7aa1cca6 namespace=moby
	I1014 08:47:31.816501   15224 command_runner.go:130] > Oct 14 15:46:47 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:47.560270512Z" level=warning msg="cleaning up after shim disconnected" id=c76c2585681071ddc096fbc042a644b58d0b7d5d604107c6906a076c7aa1cca6 namespace=moby
	I1014 08:47:31.816501   15224 command_runner.go:130] > Oct 14 15:46:47 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:47.560505714Z" level=info msg="cleaning up dead shim" namespace=moby
	I1014 08:47:31.816501   15224 command_runner.go:130] > Oct 14 15:47:03 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:03.070959923Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:31.816501   15224 command_runner.go:130] > Oct 14 15:47:03 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:03.071176624Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:31.816501   15224 command_runner.go:130] > Oct 14 15:47:03 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:03.071240924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:31.816501   15224 command_runner.go:130] > Oct 14 15:47:03 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:03.071756926Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:31.816501   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.071716036Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:31.816501   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.071943436Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:31.816501   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.071968036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:31.816501   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.072116937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:31.816501   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:47:20Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/429b989a1a986d23a2e5aee0de1aef1e683a014bebb587981622bd80a3ac5221/resolv.conf as [nameserver 172.20.96.1]"
	I1014 08:47:31.816501   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.295865797Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:31.816501   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.295993998Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:31.816501   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.296019698Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:31.816501   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.296117898Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:31.816501   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:47:20Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9092d17516eb35243fd461a360605e738727838ee50f870f3bd6c290fd061d20/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I1014 08:47:31.816501   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.536751498Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:31.816501   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.537062099Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:31.816501   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.537100499Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:31.816501   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.537246499Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:31.816501   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.821494873Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I1014 08:47:31.816501   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.821592273Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I1014 08:47:31.816501   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.821611273Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:31.816501   15224 command_runner.go:130] > Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.821730874Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I1014 08:47:31.844459   15224 logs.go:123] Gathering logs for dmesg ...
	I1014 08:47:31.844459   15224 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 08:47:31.867551   15224 command_runner.go:130] > [Oct14 15:44] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I1014 08:47:31.867648   15224 command_runner.go:130] > [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I1014 08:47:31.867648   15224 command_runner.go:130] > [  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I1014 08:47:31.867648   15224 command_runner.go:130] > [  +0.121183] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I1014 08:47:31.867648   15224 command_runner.go:130] > [  +0.024192] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I1014 08:47:31.867648   15224 command_runner.go:130] > [  +0.000000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I1014 08:47:31.867749   15224 command_runner.go:130] > [  +0.000000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I1014 08:47:31.867749   15224 command_runner.go:130] > [  +0.058588] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I1014 08:47:31.867816   15224 command_runner.go:130] > [  +0.021951] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I1014 08:47:31.867858   15224 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I1014 08:47:31.867858   15224 command_runner.go:130] > [  +5.764502] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I1014 08:47:31.867921   15224 command_runner.go:130] > [  +0.701221] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I1014 08:47:31.867921   15224 command_runner.go:130] > [  +1.823727] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	I1014 08:47:31.867955   15224 command_runner.go:130] > [  +7.351082] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I1014 08:47:31.867955   15224 command_runner.go:130] > [  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I1014 08:47:31.867955   15224 command_runner.go:130] > [  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	I1014 08:47:31.867955   15224 command_runner.go:130] > [Oct14 15:45] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	I1014 08:47:31.867955   15224 command_runner.go:130] > [  +0.175163] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	I1014 08:47:31.868054   15224 command_runner.go:130] > [ +26.061812] systemd-fstab-generator[1013]: Ignoring "noauto" option for root device
	I1014 08:47:31.868079   15224 command_runner.go:130] > [  +0.098944] kauditd_printk_skb: 71 callbacks suppressed
	I1014 08:47:31.868079   15224 command_runner.go:130] > [  +0.531295] systemd-fstab-generator[1053]: Ignoring "noauto" option for root device
	I1014 08:47:31.868079   15224 command_runner.go:130] > [Oct14 15:46] systemd-fstab-generator[1065]: Ignoring "noauto" option for root device
	I1014 08:47:31.868079   15224 command_runner.go:130] > [  +0.229472] systemd-fstab-generator[1079]: Ignoring "noauto" option for root device
	I1014 08:47:31.868079   15224 command_runner.go:130] > [  +2.943333] systemd-fstab-generator[1310]: Ignoring "noauto" option for root device
	I1014 08:47:31.868079   15224 command_runner.go:130] > [  +0.192845] systemd-fstab-generator[1321]: Ignoring "noauto" option for root device
	I1014 08:47:31.868079   15224 command_runner.go:130] > [  +0.209914] systemd-fstab-generator[1333]: Ignoring "noauto" option for root device
	I1014 08:47:31.868079   15224 command_runner.go:130] > [  +0.290916] systemd-fstab-generator[1348]: Ignoring "noauto" option for root device
	I1014 08:47:31.868079   15224 command_runner.go:130] > [  +0.928050] systemd-fstab-generator[1473]: Ignoring "noauto" option for root device
	I1014 08:47:31.868079   15224 command_runner.go:130] > [  +0.103044] kauditd_printk_skb: 202 callbacks suppressed
	I1014 08:47:31.868079   15224 command_runner.go:130] > [  +3.884891] systemd-fstab-generator[1614]: Ignoring "noauto" option for root device
	I1014 08:47:31.868079   15224 command_runner.go:130] > [  +1.232270] kauditd_printk_skb: 44 callbacks suppressed
	I1014 08:47:31.868079   15224 command_runner.go:130] > [  +5.880292] kauditd_printk_skb: 30 callbacks suppressed
	I1014 08:47:31.868079   15224 command_runner.go:130] > [  +4.216972] systemd-fstab-generator[2460]: Ignoring "noauto" option for root device
	I1014 08:47:31.868079   15224 command_runner.go:130] > [ +15.813728] kauditd_printk_skb: 72 callbacks suppressed
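The dmesg pass above boils down to one remote command: sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400, i.e. human-readable timestamps, no pager or colors, warn-level and worse only, capped at 400 lines. Nothing in this run is fatal: the nomodeset/RETBleed/MDS warnings and the NFSD recovery-directory complaints are routine for this guest image. As a rough illustration of reproducing that gather step outside the harness, here is a standard-library Go sketch; running it inside the guest (or behind minikube ssh) is an assumption, not part of logs.go.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // gatherDmesg mirrors the command line logged by the harness: warn-level
    // and above, human-readable, last 400 lines. bash -c is used so the
    // shell interprets the pipe, exactly as in the log above.
    func gatherDmesg() (string, error) {
    	out, err := exec.Command("/bin/bash", "-c",
    		"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400").CombinedOutput()
    	return string(out), err
    }

    func main() {
    	out, err := gatherDmesg()
    	if err != nil {
    		fmt.Println("dmesg gather failed:", err)
    	}
    	fmt.Print(out)
    }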
	I1014 08:47:31.869550   15224 logs.go:123] Gathering logs for describe nodes ...
	I1014 08:47:31.870127   15224 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 08:47:32.080960   15224 command_runner.go:130] > Name:               multinode-671000
	I1014 08:47:32.080960   15224 command_runner.go:130] > Roles:              control-plane
	I1014 08:47:32.080960   15224 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I1014 08:47:32.080960   15224 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I1014 08:47:32.080960   15224 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I1014 08:47:32.080960   15224 command_runner.go:130] >                     kubernetes.io/hostname=multinode-671000
	I1014 08:47:32.080960   15224 command_runner.go:130] >                     kubernetes.io/os=linux
	I1014 08:47:32.080960   15224 command_runner.go:130] >                     minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b
	I1014 08:47:32.080960   15224 command_runner.go:130] >                     minikube.k8s.io/name=multinode-671000
	I1014 08:47:32.080960   15224 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I1014 08:47:32.080960   15224 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_10_14T08_22_41_0700
	I1014 08:47:32.080960   15224 command_runner.go:130] >                     minikube.k8s.io/version=v1.34.0
	I1014 08:47:32.080960   15224 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I1014 08:47:32.080960   15224 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I1014 08:47:32.080960   15224 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I1014 08:47:32.080960   15224 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I1014 08:47:32.080960   15224 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I1014 08:47:32.080960   15224 command_runner.go:130] > CreationTimestamp:  Mon, 14 Oct 2024 15:22:36 +0000
	I1014 08:47:32.080960   15224 command_runner.go:130] > Taints:             <none>
	I1014 08:47:32.080960   15224 command_runner.go:130] > Unschedulable:      false
	I1014 08:47:32.080960   15224 command_runner.go:130] > Lease:
	I1014 08:47:32.080960   15224 command_runner.go:130] >   HolderIdentity:  multinode-671000
	I1014 08:47:32.080960   15224 command_runner.go:130] >   AcquireTime:     <unset>
	I1014 08:47:32.080960   15224 command_runner.go:130] >   RenewTime:       Mon, 14 Oct 2024 15:47:26 +0000
	I1014 08:47:32.080960   15224 command_runner.go:130] > Conditions:
	I1014 08:47:32.080960   15224 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I1014 08:47:32.080960   15224 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I1014 08:47:32.080960   15224 command_runner.go:130] >   MemoryPressure   False   Mon, 14 Oct 2024 15:46:59 +0000   Mon, 14 Oct 2024 15:22:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I1014 08:47:32.080960   15224 command_runner.go:130] >   DiskPressure     False   Mon, 14 Oct 2024 15:46:59 +0000   Mon, 14 Oct 2024 15:22:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I1014 08:47:32.080960   15224 command_runner.go:130] >   PIDPressure      False   Mon, 14 Oct 2024 15:46:59 +0000   Mon, 14 Oct 2024 15:22:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I1014 08:47:32.080960   15224 command_runner.go:130] >   Ready            True    Mon, 14 Oct 2024 15:46:59 +0000   Mon, 14 Oct 2024 15:46:59 +0000   KubeletReady                 kubelet is posting ready status
	I1014 08:47:32.080960   15224 command_runner.go:130] > Addresses:
	I1014 08:47:32.080960   15224 command_runner.go:130] >   InternalIP:  172.20.106.123
	I1014 08:47:32.080960   15224 command_runner.go:130] >   Hostname:    multinode-671000
	I1014 08:47:32.080960   15224 command_runner.go:130] > Capacity:
	I1014 08:47:32.080960   15224 command_runner.go:130] >   cpu:                2
	I1014 08:47:32.080960   15224 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I1014 08:47:32.080960   15224 command_runner.go:130] >   hugepages-2Mi:      0
	I1014 08:47:32.080960   15224 command_runner.go:130] >   memory:             2164264Ki
	I1014 08:47:32.080960   15224 command_runner.go:130] >   pods:               110
	I1014 08:47:32.080960   15224 command_runner.go:130] > Allocatable:
	I1014 08:47:32.080960   15224 command_runner.go:130] >   cpu:                2
	I1014 08:47:32.080960   15224 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I1014 08:47:32.080960   15224 command_runner.go:130] >   hugepages-2Mi:      0
	I1014 08:47:32.080960   15224 command_runner.go:130] >   memory:             2164264Ki
	I1014 08:47:32.080960   15224 command_runner.go:130] >   pods:               110
	I1014 08:47:32.080960   15224 command_runner.go:130] > System Info:
	I1014 08:47:32.080960   15224 command_runner.go:130] >   Machine ID:                 fc389f3b9e2846b4b909cfc8e7984541
	I1014 08:47:32.081981   15224 command_runner.go:130] >   System UUID:                f72bc210-2ad8-1e4f-ad7d-3e75159c3d98
	I1014 08:47:32.081981   15224 command_runner.go:130] >   Boot ID:                    98d09a99-1eff-402d-837f-6cacdc4463d7
	I1014 08:47:32.081981   15224 command_runner.go:130] >   Kernel Version:             5.10.207
	I1014 08:47:32.081981   15224 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I1014 08:47:32.081981   15224 command_runner.go:130] >   Operating System:           linux
	I1014 08:47:32.081981   15224 command_runner.go:130] >   Architecture:               amd64
	I1014 08:47:32.081981   15224 command_runner.go:130] >   Container Runtime Version:  docker://27.3.1
	I1014 08:47:32.081981   15224 command_runner.go:130] >   Kubelet Version:            v1.31.1
	I1014 08:47:32.081981   15224 command_runner.go:130] >   Kube-Proxy Version:         v1.31.1
	I1014 08:47:32.081981   15224 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I1014 08:47:32.081981   15224 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I1014 08:47:32.081981   15224 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I1014 08:47:32.081981   15224 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I1014 08:47:32.081981   15224 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I1014 08:47:32.081981   15224 command_runner.go:130] >   default                     busybox-7dff88458-vlp7j                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I1014 08:47:32.081981   15224 command_runner.go:130] >   kube-system                 coredns-7c65d6cfc9-fs9ct                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     24m
	I1014 08:47:32.081981   15224 command_runner.go:130] >   kube-system                 etcd-multinode-671000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         75s
	I1014 08:47:32.081981   15224 command_runner.go:130] >   kube-system                 kindnet-wqrx6                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      24m
	I1014 08:47:32.081981   15224 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-671000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         77s
	I1014 08:47:32.081981   15224 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-671000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         24m
	I1014 08:47:32.081981   15224 command_runner.go:130] >   kube-system                 kube-proxy-r74dx                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	I1014 08:47:32.081981   15224 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-671000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24m
	I1014 08:47:32.081981   15224 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	I1014 08:47:32.081981   15224 command_runner.go:130] > Allocated resources:
	I1014 08:47:32.081981   15224 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I1014 08:47:32.081981   15224 command_runner.go:130] >   Resource           Requests     Limits
	I1014 08:47:32.081981   15224 command_runner.go:130] >   --------           --------     ------
	I1014 08:47:32.081981   15224 command_runner.go:130] >   cpu                850m (42%)   100m (5%)
	I1014 08:47:32.081981   15224 command_runner.go:130] >   memory             220Mi (10%)  220Mi (10%)
	I1014 08:47:32.081981   15224 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I1014 08:47:32.081981   15224 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I1014 08:47:32.081981   15224 command_runner.go:130] > Events:
	I1014 08:47:32.081981   15224 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I1014 08:47:32.081981   15224 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I1014 08:47:32.081981   15224 command_runner.go:130] >   Normal  Starting                 24m                kube-proxy       
	I1014 08:47:32.081981   15224 command_runner.go:130] >   Normal  Starting                 73s                kube-proxy       
	I1014 08:47:32.081981   15224 command_runner.go:130] >   Normal  Starting                 25m                kubelet          Starting kubelet.
	I1014 08:47:32.081981   15224 command_runner.go:130] >   Normal  NodeHasSufficientMemory  25m (x8 over 25m)  kubelet          Node multinode-671000 status is now: NodeHasSufficientMemory
	I1014 08:47:32.081981   15224 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    25m (x8 over 25m)  kubelet          Node multinode-671000 status is now: NodeHasNoDiskPressure
	I1014 08:47:32.081981   15224 command_runner.go:130] >   Normal  NodeHasSufficientPID     25m (x7 over 25m)  kubelet          Node multinode-671000 status is now: NodeHasSufficientPID
	I1014 08:47:32.081981   15224 command_runner.go:130] >   Normal  NodeAllocatableEnforced  25m                kubelet          Updated Node Allocatable limit across pods
	I1014 08:47:32.081981   15224 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    24m                kubelet          Node multinode-671000 status is now: NodeHasNoDiskPressure
	I1014 08:47:32.081981   15224 command_runner.go:130] >   Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	I1014 08:47:32.081981   15224 command_runner.go:130] >   Normal  NodeHasSufficientMemory  24m                kubelet          Node multinode-671000 status is now: NodeHasSufficientMemory
	I1014 08:47:32.081981   15224 command_runner.go:130] >   Normal  NodeHasSufficientPID     24m                kubelet          Node multinode-671000 status is now: NodeHasSufficientPID
	I1014 08:47:32.081981   15224 command_runner.go:130] >   Normal  Starting                 24m                kubelet          Starting kubelet.
	I1014 08:47:32.081981   15224 command_runner.go:130] >   Normal  RegisteredNode           24m                node-controller  Node multinode-671000 event: Registered Node multinode-671000 in Controller
	I1014 08:47:32.081981   15224 command_runner.go:130] >   Normal  NodeReady                24m                kubelet          Node multinode-671000 status is now: NodeReady
	I1014 08:47:32.081981   15224 command_runner.go:130] >   Normal  Starting                 83s                kubelet          Starting kubelet.
	I1014 08:47:32.081981   15224 command_runner.go:130] >   Normal  NodeAllocatableEnforced  83s                kubelet          Updated Node Allocatable limit across pods
	I1014 08:47:32.081981   15224 command_runner.go:130] >   Normal  NodeHasSufficientMemory  82s (x8 over 83s)  kubelet          Node multinode-671000 status is now: NodeHasSufficientMemory
	I1014 08:47:32.081981   15224 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    82s (x8 over 83s)  kubelet          Node multinode-671000 status is now: NodeHasNoDiskPressure
	I1014 08:47:32.081981   15224 command_runner.go:130] >   Normal  NodeHasSufficientPID     82s (x7 over 83s)  kubelet          Node multinode-671000 status is now: NodeHasSufficientPID
	I1014 08:47:32.081981   15224 command_runner.go:130] >   Normal  RegisteredNode           74s                node-controller  Node multinode-671000 event: Registered Node multinode-671000 in Controller
	I1014 08:47:32.081981   15224 command_runner.go:130] > Name:               multinode-671000-m02
	I1014 08:47:32.081981   15224 command_runner.go:130] > Roles:              <none>
	I1014 08:47:32.081981   15224 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I1014 08:47:32.081981   15224 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I1014 08:47:32.081981   15224 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I1014 08:47:32.081981   15224 command_runner.go:130] >                     kubernetes.io/hostname=multinode-671000-m02
	I1014 08:47:32.081981   15224 command_runner.go:130] >                     kubernetes.io/os=linux
	I1014 08:47:32.081981   15224 command_runner.go:130] >                     minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b
	I1014 08:47:32.081981   15224 command_runner.go:130] >                     minikube.k8s.io/name=multinode-671000
	I1014 08:47:32.081981   15224 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I1014 08:47:32.081981   15224 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_10_14T08_25_50_0700
	I1014 08:47:32.081981   15224 command_runner.go:130] >                     minikube.k8s.io/version=v1.34.0
	I1014 08:47:32.082955   15224 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I1014 08:47:32.082955   15224 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I1014 08:47:32.082955   15224 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I1014 08:47:32.082955   15224 command_runner.go:130] > CreationTimestamp:  Mon, 14 Oct 2024 15:25:49 +0000
	I1014 08:47:32.082955   15224 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I1014 08:47:32.082955   15224 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I1014 08:47:32.082955   15224 command_runner.go:130] > Unschedulable:      false
	I1014 08:47:32.082955   15224 command_runner.go:130] > Lease:
	I1014 08:47:32.082955   15224 command_runner.go:130] >   HolderIdentity:  multinode-671000-m02
	I1014 08:47:32.082955   15224 command_runner.go:130] >   AcquireTime:     <unset>
	I1014 08:47:32.082955   15224 command_runner.go:130] >   RenewTime:       Mon, 14 Oct 2024 15:43:00 +0000
	I1014 08:47:32.082955   15224 command_runner.go:130] > Conditions:
	I1014 08:47:32.082955   15224 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I1014 08:47:32.082955   15224 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I1014 08:47:32.082955   15224 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 14 Oct 2024 15:42:08 +0000   Mon, 14 Oct 2024 15:43:45 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I1014 08:47:32.082955   15224 command_runner.go:130] >   DiskPressure     Unknown   Mon, 14 Oct 2024 15:42:08 +0000   Mon, 14 Oct 2024 15:43:45 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I1014 08:47:32.082955   15224 command_runner.go:130] >   PIDPressure      Unknown   Mon, 14 Oct 2024 15:42:08 +0000   Mon, 14 Oct 2024 15:43:45 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I1014 08:47:32.082955   15224 command_runner.go:130] >   Ready            Unknown   Mon, 14 Oct 2024 15:42:08 +0000   Mon, 14 Oct 2024 15:43:45 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I1014 08:47:32.082955   15224 command_runner.go:130] > Addresses:
	I1014 08:47:32.082955   15224 command_runner.go:130] >   InternalIP:  172.20.109.137
	I1014 08:47:32.082955   15224 command_runner.go:130] >   Hostname:    multinode-671000-m02
	I1014 08:47:32.082955   15224 command_runner.go:130] > Capacity:
	I1014 08:47:32.082955   15224 command_runner.go:130] >   cpu:                2
	I1014 08:47:32.082955   15224 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I1014 08:47:32.082955   15224 command_runner.go:130] >   hugepages-2Mi:      0
	I1014 08:47:32.082955   15224 command_runner.go:130] >   memory:             2164264Ki
	I1014 08:47:32.082955   15224 command_runner.go:130] >   pods:               110
	I1014 08:47:32.082955   15224 command_runner.go:130] > Allocatable:
	I1014 08:47:32.082955   15224 command_runner.go:130] >   cpu:                2
	I1014 08:47:32.082955   15224 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I1014 08:47:32.082955   15224 command_runner.go:130] >   hugepages-2Mi:      0
	I1014 08:47:32.082955   15224 command_runner.go:130] >   memory:             2164264Ki
	I1014 08:47:32.082955   15224 command_runner.go:130] >   pods:               110
	I1014 08:47:32.082955   15224 command_runner.go:130] > System Info:
	I1014 08:47:32.082955   15224 command_runner.go:130] >   Machine ID:                 d57a3470ff3c4f03abf025a19c5c23d9
	I1014 08:47:32.082955   15224 command_runner.go:130] >   System UUID:                d36e677f-0d87-a94f-af28-d8324326f88f
	I1014 08:47:32.082955   15224 command_runner.go:130] >   Boot ID:                    33649cab-156e-4789-8adf-a6fc060b3de7
	I1014 08:47:32.082955   15224 command_runner.go:130] >   Kernel Version:             5.10.207
	I1014 08:47:32.082955   15224 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I1014 08:47:32.082955   15224 command_runner.go:130] >   Operating System:           linux
	I1014 08:47:32.082955   15224 command_runner.go:130] >   Architecture:               amd64
	I1014 08:47:32.082955   15224 command_runner.go:130] >   Container Runtime Version:  docker://27.3.1
	I1014 08:47:32.082955   15224 command_runner.go:130] >   Kubelet Version:            v1.31.1
	I1014 08:47:32.082955   15224 command_runner.go:130] >   Kube-Proxy Version:         v1.31.1
	I1014 08:47:32.082955   15224 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I1014 08:47:32.082955   15224 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I1014 08:47:32.082955   15224 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I1014 08:47:32.082955   15224 command_runner.go:130] >   Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I1014 08:47:32.082955   15224 command_runner.go:130] >   ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	I1014 08:47:32.082955   15224 command_runner.go:130] >   default                     busybox-7dff88458-bnqj6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I1014 08:47:32.082955   15224 command_runner.go:130] >   kube-system                 kindnet-rgbjf              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      21m
	I1014 08:47:32.082955   15224 command_runner.go:130] >   kube-system                 kube-proxy-kbpjf           0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	I1014 08:47:32.082955   15224 command_runner.go:130] > Allocated resources:
	I1014 08:47:32.082955   15224 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I1014 08:47:32.082955   15224 command_runner.go:130] >   Resource           Requests   Limits
	I1014 08:47:32.082955   15224 command_runner.go:130] >   --------           --------   ------
	I1014 08:47:32.082955   15224 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I1014 08:47:32.082955   15224 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I1014 08:47:32.082955   15224 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I1014 08:47:32.082955   15224 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I1014 08:47:32.082955   15224 command_runner.go:130] > Events:
	I1014 08:47:32.082955   15224 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I1014 08:47:32.082955   15224 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I1014 08:47:32.082955   15224 command_runner.go:130] >   Normal  Starting                 21m                kube-proxy       
	I1014 08:47:32.082955   15224 command_runner.go:130] >   Normal  NodeHasSufficientMemory  21m (x2 over 21m)  kubelet          Node multinode-671000-m02 status is now: NodeHasSufficientMemory
	I1014 08:47:32.082955   15224 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    21m (x2 over 21m)  kubelet          Node multinode-671000-m02 status is now: NodeHasNoDiskPressure
	I1014 08:47:32.082955   15224 command_runner.go:130] >   Normal  NodeHasSufficientPID     21m (x2 over 21m)  kubelet          Node multinode-671000-m02 status is now: NodeHasSufficientPID
	I1014 08:47:32.082955   15224 command_runner.go:130] >   Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	I1014 08:47:32.082955   15224 command_runner.go:130] >   Normal  RegisteredNode           21m                node-controller  Node multinode-671000-m02 event: Registered Node multinode-671000-m02 in Controller
	I1014 08:47:32.082955   15224 command_runner.go:130] >   Normal  NodeReady                21m                kubelet          Node multinode-671000-m02 status is now: NodeReady
	I1014 08:47:32.082955   15224 command_runner.go:130] >   Normal  NodeNotReady             3m47s              node-controller  Node multinode-671000-m02 status is now: NodeNotReady
	I1014 08:47:32.082955   15224 command_runner.go:130] >   Normal  RegisteredNode           74s                node-controller  Node multinode-671000-m02 event: Registered Node multinode-671000-m02 in Controller
	I1014 08:47:32.082955   15224 command_runner.go:130] > Name:               multinode-671000-m03
	I1014 08:47:32.082955   15224 command_runner.go:130] > Roles:              <none>
	I1014 08:47:32.082955   15224 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I1014 08:47:32.082955   15224 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I1014 08:47:32.082955   15224 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I1014 08:47:32.082955   15224 command_runner.go:130] >                     kubernetes.io/hostname=multinode-671000-m03
	I1014 08:47:32.082955   15224 command_runner.go:130] >                     kubernetes.io/os=linux
	I1014 08:47:32.082955   15224 command_runner.go:130] >                     minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b
	I1014 08:47:32.082955   15224 command_runner.go:130] >                     minikube.k8s.io/name=multinode-671000
	I1014 08:47:32.082955   15224 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I1014 08:47:32.082955   15224 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_10_14T08_41_35_0700
	I1014 08:47:32.083951   15224 command_runner.go:130] >                     minikube.k8s.io/version=v1.34.0
	I1014 08:47:32.083951   15224 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I1014 08:47:32.083951   15224 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I1014 08:47:32.083951   15224 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I1014 08:47:32.083951   15224 command_runner.go:130] > CreationTimestamp:  Mon, 14 Oct 2024 15:41:34 +0000
	I1014 08:47:32.083951   15224 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I1014 08:47:32.083951   15224 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I1014 08:47:32.083951   15224 command_runner.go:130] > Unschedulable:      false
	I1014 08:47:32.083951   15224 command_runner.go:130] > Lease:
	I1014 08:47:32.083951   15224 command_runner.go:130] >   HolderIdentity:  multinode-671000-m03
	I1014 08:47:32.083951   15224 command_runner.go:130] >   AcquireTime:     <unset>
	I1014 08:47:32.083951   15224 command_runner.go:130] >   RenewTime:       Mon, 14 Oct 2024 15:42:46 +0000
	I1014 08:47:32.083951   15224 command_runner.go:130] > Conditions:
	I1014 08:47:32.083951   15224 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I1014 08:47:32.083951   15224 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I1014 08:47:32.083951   15224 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 14 Oct 2024 15:41:53 +0000   Mon, 14 Oct 2024 15:43:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I1014 08:47:32.083951   15224 command_runner.go:130] >   DiskPressure     Unknown   Mon, 14 Oct 2024 15:41:53 +0000   Mon, 14 Oct 2024 15:43:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I1014 08:47:32.083951   15224 command_runner.go:130] >   PIDPressure      Unknown   Mon, 14 Oct 2024 15:41:53 +0000   Mon, 14 Oct 2024 15:43:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I1014 08:47:32.083951   15224 command_runner.go:130] >   Ready            Unknown   Mon, 14 Oct 2024 15:41:53 +0000   Mon, 14 Oct 2024 15:43:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I1014 08:47:32.083951   15224 command_runner.go:130] > Addresses:
	I1014 08:47:32.083951   15224 command_runner.go:130] >   InternalIP:  172.20.102.29
	I1014 08:47:32.083951   15224 command_runner.go:130] >   Hostname:    multinode-671000-m03
	I1014 08:47:32.083951   15224 command_runner.go:130] > Capacity:
	I1014 08:47:32.083951   15224 command_runner.go:130] >   cpu:                2
	I1014 08:47:32.083951   15224 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I1014 08:47:32.083951   15224 command_runner.go:130] >   hugepages-2Mi:      0
	I1014 08:47:32.083951   15224 command_runner.go:130] >   memory:             2164264Ki
	I1014 08:47:32.083951   15224 command_runner.go:130] >   pods:               110
	I1014 08:47:32.083951   15224 command_runner.go:130] > Allocatable:
	I1014 08:47:32.083951   15224 command_runner.go:130] >   cpu:                2
	I1014 08:47:32.083951   15224 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I1014 08:47:32.083951   15224 command_runner.go:130] >   hugepages-2Mi:      0
	I1014 08:47:32.083951   15224 command_runner.go:130] >   memory:             2164264Ki
	I1014 08:47:32.083951   15224 command_runner.go:130] >   pods:               110
	I1014 08:47:32.083951   15224 command_runner.go:130] > System Info:
	I1014 08:47:32.083951   15224 command_runner.go:130] >   Machine ID:                 6da8cf5e96c04d55b9129d0893534bf2
	I1014 08:47:32.083951   15224 command_runner.go:130] >   System UUID:                49616488-815a-3f43-8f47-13dbf29b6ca7
	I1014 08:47:32.083951   15224 command_runner.go:130] >   Boot ID:                    d9fe58fb-ac8e-4430-9563-1b3e9fd35ffd
	I1014 08:47:32.083951   15224 command_runner.go:130] >   Kernel Version:             5.10.207
	I1014 08:47:32.083951   15224 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I1014 08:47:32.083951   15224 command_runner.go:130] >   Operating System:           linux
	I1014 08:47:32.083951   15224 command_runner.go:130] >   Architecture:               amd64
	I1014 08:47:32.083951   15224 command_runner.go:130] >   Container Runtime Version:  docker://27.3.1
	I1014 08:47:32.083951   15224 command_runner.go:130] >   Kubelet Version:            v1.31.1
	I1014 08:47:32.083951   15224 command_runner.go:130] >   Kube-Proxy Version:         v1.31.1
	I1014 08:47:32.083951   15224 command_runner.go:130] > PodCIDR:                      10.244.3.0/24
	I1014 08:47:32.083951   15224 command_runner.go:130] > PodCIDRs:                     10.244.3.0/24
	I1014 08:47:32.083951   15224 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I1014 08:47:32.083951   15224 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I1014 08:47:32.083951   15224 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I1014 08:47:32.083951   15224 command_runner.go:130] >   kube-system                 kindnet-5rqxq       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	I1014 08:47:32.083951   15224 command_runner.go:130] >   kube-system                 kube-proxy-n6txs    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	I1014 08:47:32.083951   15224 command_runner.go:130] > Allocated resources:
	I1014 08:47:32.083951   15224 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I1014 08:47:32.083951   15224 command_runner.go:130] >   Resource           Requests   Limits
	I1014 08:47:32.083951   15224 command_runner.go:130] >   --------           --------   ------
	I1014 08:47:32.083951   15224 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I1014 08:47:32.083951   15224 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I1014 08:47:32.083951   15224 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I1014 08:47:32.083951   15224 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I1014 08:47:32.083951   15224 command_runner.go:130] > Events:
	I1014 08:47:32.083951   15224 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I1014 08:47:32.083951   15224 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I1014 08:47:32.083951   15224 command_runner.go:130] >   Normal  Starting                 5m53s                  kube-proxy       
	I1014 08:47:32.083951   15224 command_runner.go:130] >   Normal  Starting                 16m                    kube-proxy       
	I1014 08:47:32.083951   15224 command_runner.go:130] >   Normal  NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	I1014 08:47:32.083951   15224 command_runner.go:130] >   Normal  NodeHasSufficientMemory  16m (x2 over 16m)      kubelet          Node multinode-671000-m03 status is now: NodeHasSufficientMemory
	I1014 08:47:32.083951   15224 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    16m (x2 over 16m)      kubelet          Node multinode-671000-m03 status is now: NodeHasNoDiskPressure
	I1014 08:47:32.083951   15224 command_runner.go:130] >   Normal  NodeHasSufficientPID     16m (x2 over 16m)      kubelet          Node multinode-671000-m03 status is now: NodeHasSufficientPID
	I1014 08:47:32.083951   15224 command_runner.go:130] >   Normal  NodeReady                16m                    kubelet          Node multinode-671000-m03 status is now: NodeReady
	I1014 08:47:32.083951   15224 command_runner.go:130] >   Normal  Starting                 5m58s                  kubelet          Starting kubelet.
	I1014 08:47:32.083951   15224 command_runner.go:130] >   Normal  NodeHasSufficientMemory  5m58s (x2 over 5m58s)  kubelet          Node multinode-671000-m03 status is now: NodeHasSufficientMemory
	I1014 08:47:32.083951   15224 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    5m58s (x2 over 5m58s)  kubelet          Node multinode-671000-m03 status is now: NodeHasNoDiskPressure
	I1014 08:47:32.083951   15224 command_runner.go:130] >   Normal  NodeHasSufficientPID     5m58s (x2 over 5m58s)  kubelet          Node multinode-671000-m03 status is now: NodeHasSufficientPID
	I1014 08:47:32.083951   15224 command_runner.go:130] >   Normal  NodeAllocatableEnforced  5m58s                  kubelet          Updated Node Allocatable limit across pods
	I1014 08:47:32.083951   15224 command_runner.go:130] >   Normal  RegisteredNode           5m53s                  node-controller  Node multinode-671000-m03 event: Registered Node multinode-671000-m03 in Controller
	I1014 08:47:32.083951   15224 command_runner.go:130] >   Normal  NodeReady                5m39s                  kubelet          Node multinode-671000-m03 status is now: NodeReady
	I1014 08:47:32.083951   15224 command_runner.go:130] >   Normal  NodeNotReady             4m3s                   node-controller  Node multinode-671000-m03 status is now: NodeNotReady
	I1014 08:47:32.083951   15224 command_runner.go:130] >   Normal  RegisteredNode           74s                    node-controller  Node multinode-671000-m03 event: Registered Node multinode-671000-m03 in Controller
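The describe-nodes dump is the heart of this post-mortem: multinode-671000 is Ready again (condition transition 15:46:59) after its control-plane restart, while multinode-671000-m02 and -m03 still carry node.kubernetes.io/unreachable NoExecute/NoSchedule taints, with every condition Unknown because their kubelets stopped posting status around 15:43. When only the readiness question matters, the multi-page describe output can be reduced with a jsonpath query; the sketch below is illustrative, assumes kubectl on PATH pointed at the same kubeconfig, and is not part of the test harness.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // main prints one "<node>\t<Ready status>" line per node, which is
    // usually all you need from a multi-page `kubectl describe nodes` dump.
    func main() {
    	jsonpath := `{range .items[*]}{.metadata.name}{"\t"}` +
    		`{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}`
    	out, err := exec.Command("kubectl", "get", "nodes",
    		"-o", "jsonpath="+jsonpath).CombinedOutput()
    	if err != nil {
    		fmt.Println("kubectl failed:", err, string(out))
    		return
    	}
    	fmt.Print(string(out))
    }

For this cluster the expected output would be multinode-671000 reporting True and the two workers reporting Unknown, matching the taints above.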
	I1014 08:47:32.093961   15224 logs.go:123] Gathering logs for kube-scheduler [661e75bbf6b4] ...
	I1014 08:47:32.093961   15224 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 661e75bbf6b4"
	I1014 08:47:32.122585   15224 command_runner.go:130] ! I1014 15:22:34.688194       1 serving.go:386] Generated self-signed cert in-memory
	I1014 08:47:32.122585   15224 command_runner.go:130] ! W1014 15:22:36.199586       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I1014 08:47:32.122585   15224 command_runner.go:130] ! W1014 15:22:36.199661       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I1014 08:47:32.122585   15224 command_runner.go:130] ! W1014 15:22:36.199675       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	I1014 08:47:32.122585   15224 command_runner.go:130] ! W1014 15:22:36.199681       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1014 08:47:32.122585   15224 command_runner.go:130] ! I1014 15:22:36.288536       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I1014 08:47:32.122585   15224 command_runner.go:130] ! I1014 15:22:36.288649       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 08:47:32.122585   15224 command_runner.go:130] ! I1014 15:22:36.292628       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1014 08:47:32.122585   15224 command_runner.go:130] ! I1014 15:22:36.292942       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1014 08:47:32.122585   15224 command_runner.go:130] ! I1014 15:22:36.293038       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1014 08:47:32.122585   15224 command_runner.go:130] ! I1014 15:22:36.293102       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1014 08:47:32.122585   15224 command_runner.go:130] ! W1014 15:22:36.298034       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I1014 08:47:32.122585   15224 command_runner.go:130] ! E1014 15:22:36.298090       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:32.122585   15224 command_runner.go:130] ! W1014 15:22:36.298377       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I1014 08:47:32.122585   15224 command_runner.go:130] ! E1014 15:22:36.298420       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:32.122585   15224 command_runner.go:130] ! W1014 15:22:36.298587       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I1014 08:47:32.122585   15224 command_runner.go:130] ! E1014 15:22:36.298642       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:32.123111   15224 command_runner.go:130] ! W1014 15:22:36.298730       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I1014 08:47:32.123111   15224 command_runner.go:130] ! E1014 15:22:36.298855       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:32.123111   15224 command_runner.go:130] ! W1014 15:22:36.299272       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I1014 08:47:32.123111   15224 command_runner.go:130] ! E1014 15:22:36.299314       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:32.123297   15224 command_runner.go:130] ! W1014 15:22:36.299416       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I1014 08:47:32.123408   15224 command_runner.go:130] ! E1014 15:22:36.299618       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:32.123408   15224 command_runner.go:130] ! W1014 15:22:36.299693       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I1014 08:47:32.123541   15224 command_runner.go:130] ! E1014 15:22:36.299710       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:32.123594   15224 command_runner.go:130] ! W1014 15:22:36.299857       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I1014 08:47:32.123594   15224 command_runner.go:130] ! E1014 15:22:36.299920       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:32.123594   15224 command_runner.go:130] ! W1014 15:22:36.302822       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1014 08:47:32.123686   15224 command_runner.go:130] ! E1014 15:22:36.303096       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1014 08:47:32.123767   15224 command_runner.go:130] ! W1014 15:22:36.303242       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I1014 08:47:32.123830   15224 command_runner.go:130] ! E1014 15:22:36.303288       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:32.123830   15224 command_runner.go:130] ! W1014 15:22:36.303391       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I1014 08:47:32.123830   15224 command_runner.go:130] ! E1014 15:22:36.303426       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:32.123830   15224 command_runner.go:130] ! W1014 15:22:36.303605       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I1014 08:47:32.123830   15224 command_runner.go:130] ! E1014 15:22:36.303643       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:32.123830   15224 command_runner.go:130] ! W1014 15:22:36.303739       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I1014 08:47:32.123830   15224 command_runner.go:130] ! E1014 15:22:36.303771       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:32.123830   15224 command_runner.go:130] ! W1014 15:22:36.303825       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I1014 08:47:32.123830   15224 command_runner.go:130] ! E1014 15:22:36.303860       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:32.123830   15224 command_runner.go:130] ! W1014 15:22:36.304041       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I1014 08:47:32.123830   15224 command_runner.go:130] ! E1014 15:22:36.304079       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:32.123830   15224 command_runner.go:130] ! W1014 15:22:37.145637       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I1014 08:47:32.124371   15224 command_runner.go:130] ! E1014 15:22:37.146051       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:32.124371   15224 command_runner.go:130] ! W1014 15:22:37.146415       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I1014 08:47:32.124467   15224 command_runner.go:130] ! E1014 15:22:37.146705       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:32.124498   15224 command_runner.go:130] ! W1014 15:22:37.189116       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I1014 08:47:32.124581   15224 command_runner.go:130] ! E1014 15:22:37.189252       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:32.124660   15224 command_runner.go:130] ! W1014 15:22:37.205810       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I1014 08:47:32.124727   15224 command_runner.go:130] ! E1014 15:22:37.206152       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:32.124837   15224 command_runner.go:130] ! W1014 15:22:37.269786       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I1014 08:47:32.124884   15224 command_runner.go:130] ! E1014 15:22:37.269856       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:32.124884   15224 command_runner.go:130] ! W1014 15:22:37.270258       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I1014 08:47:32.124884   15224 command_runner.go:130] ! E1014 15:22:37.270286       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:32.124884   15224 command_runner.go:130] ! W1014 15:22:37.290857       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I1014 08:47:32.124884   15224 command_runner.go:130] ! E1014 15:22:37.290997       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:32.124884   15224 command_runner.go:130] ! W1014 15:22:37.304519       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1014 08:47:32.124884   15224 command_runner.go:130] ! E1014 15:22:37.305020       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1014 08:47:32.124884   15224 command_runner.go:130] ! W1014 15:22:37.328746       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I1014 08:47:32.124884   15224 command_runner.go:130] ! E1014 15:22:37.329049       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:32.124884   15224 command_runner.go:130] ! W1014 15:22:37.338059       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I1014 08:47:32.124884   15224 command_runner.go:130] ! E1014 15:22:37.338269       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:32.124884   15224 command_runner.go:130] ! W1014 15:22:37.394728       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I1014 08:47:32.124884   15224 command_runner.go:130] ! E1014 15:22:37.394803       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:32.124884   15224 command_runner.go:130] ! W1014 15:22:37.445455       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I1014 08:47:32.125416   15224 command_runner.go:130] ! E1014 15:22:37.445612       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:32.125416   15224 command_runner.go:130] ! W1014 15:22:37.490285       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I1014 08:47:32.125509   15224 command_runner.go:130] ! E1014 15:22:37.490817       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:32.125555   15224 command_runner.go:130] ! W1014 15:22:37.537304       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I1014 08:47:32.125585   15224 command_runner.go:130] ! E1014 15:22:37.537396       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:32.125585   15224 command_runner.go:130] ! W1014 15:22:37.713713       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I1014 08:47:32.125585   15224 command_runner.go:130] ! E1014 15:22:37.713777       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1014 08:47:32.125585   15224 command_runner.go:130] ! I1014 15:22:39.593596       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1014 08:47:32.125585   15224 command_runner.go:130] ! I1014 15:43:46.388691       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I1014 08:47:32.125585   15224 command_runner.go:130] ! I1014 15:43:46.388783       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1014 08:47:32.125585   15224 command_runner.go:130] ! I1014 15:43:46.389141       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1014 08:47:32.125585   15224 command_runner.go:130] ! E1014 15:43:46.389549       1 run.go:72] "command failed" err="finished without leader elect"
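
The run of "forbidden" warnings in the scheduler block above comes from kube-scheduler's informers starting before the RBAC bootstrap for the system:kube-scheduler identity has landed; once the "Caches are synced" line appears the lists succeed, and the closing "finished without leader elect" error is just the scheduler being torn down with the node. A minimal sketch, assuming kubectl access to this cluster (the program and the impersonation check are illustrative, not part of the minikube test suite), of verifying after boot the same grants the log shows being denied:

    // rbaccheck.go - illustrative sketch, not part of the test suite: ask the
    // API server, as system:kube-scheduler, whether the listings that were
    // denied during bootstrap are now permitted.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Resources the scheduler was denied above while RBAC was bootstrapping.
    	checks := [][]string{
    		{"list", "nodes"},
    		{"list", "pods"},
    		{"list", "poddisruptionbudgets.policy"},
    		{"list", "storageclasses.storage.k8s.io"},
    	}
    	for _, c := range checks {
    		// "kubectl auth can-i" exits non-zero for "no", so err is informative.
    		out, err := exec.Command("kubectl", "auth", "can-i", c[0], c[1],
    			"--as=system:kube-scheduler").CombinedOutput()
    		fmt.Printf("can-i %s %s: %s (err=%v)\n", c[0], c[1], out, err)
    	}
    }
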
	I1014 08:47:32.138622   15224 logs.go:123] Gathering logs for etcd [48c8492e231e] ...
	I1014 08:47:32.138622   15224 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 48c8492e231e"
	I1014 08:47:32.168336   15224 command_runner.go:130] ! {"level":"warn","ts":"2024-10-14T15:46:11.845953Z","caller":"embed/config.go:687","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I1014 08:47:32.168336   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:11.848739Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.20.106.123:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.20.106.123:2380","--initial-cluster=multinode-671000=https://172.20.106.123:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.20.106.123:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.20.106.123:2380","--name=multinode-671000","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I1014 08:47:32.168336   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:11.848857Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I1014 08:47:32.168336   15224 command_runner.go:130] ! {"level":"warn","ts":"2024-10-14T15:46:11.848886Z","caller":"embed/config.go:687","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I1014 08:47:32.168336   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:11.848900Z","caller":"embed/etcd.go:128","msg":"configuring peer listeners","listen-peer-urls":["https://172.20.106.123:2380"]}
	I1014 08:47:32.168336   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:11.848962Z","caller":"embed/etcd.go:496","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I1014 08:47:32.169309   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:11.854418Z","caller":"embed/etcd.go:136","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.20.106.123:2379"]}
	I1014 08:47:32.169309   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:11.857036Z","caller":"embed/etcd.go:310","msg":"starting an etcd server","etcd-version":"3.5.15","git-sha":"9a5533382","go-version":"go1.21.12","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-671000","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.20.106.123:2380"],"listen-peer-urls":["https://172.20.106.123:2380"],"advertise-client-urls":["https://172.20.106.123:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.20.106.123:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I1014 08:47:32.169309   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:11.899392Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"40.66952ms"}
	I1014 08:47:32.169309   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:11.949173Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	I1014 08:47:32.169309   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:11.984197Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"2dcbff584edb18cc","local-member-id":"782c48cbdf98397b","commit-index":2088}
	I1014 08:47:32.169309   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:11.985089Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"782c48cbdf98397b switched to configuration voters=()"}
	I1014 08:47:32.169309   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:11.987739Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"782c48cbdf98397b became follower at term 2"}
	I1014 08:47:32.169309   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:11.987772Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 782c48cbdf98397b [peers: [], term: 2, commit: 2088, applied: 0, lastindex: 2088, lastterm: 2]"}
	I1014 08:47:32.169309   15224 command_runner.go:130] ! {"level":"warn","ts":"2024-10-14T15:46:12.003567Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	I1014 08:47:32.169309   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.010981Z","caller":"mvcc/kvstore.go:341","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1396}
	I1014 08:47:32.169309   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.025362Z","caller":"mvcc/kvstore.go:418","msg":"kvstore restored","current-rev":1813}
	I1014 08:47:32.169309   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.035174Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I1014 08:47:32.169309   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.045608Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"782c48cbdf98397b","timeout":"7s"}
	I1014 08:47:32.169309   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.046705Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"782c48cbdf98397b"}
	I1014 08:47:32.169309   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.046807Z","caller":"etcdserver/server.go:867","msg":"starting etcd server","local-member-id":"782c48cbdf98397b","local-server-version":"3.5.15","cluster-version":"to_be_decided"}
	I1014 08:47:32.169309   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.047198Z","caller":"etcdserver/server.go:767","msg":"starting initial election tick advance","election-ticks":10}
	I1014 08:47:32.169309   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.047977Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I1014 08:47:32.169309   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.048044Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I1014 08:47:32.169309   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.048058Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I1014 08:47:32.169309   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.048736Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"782c48cbdf98397b switched to configuration voters=(8659376223993477499)"}
	I1014 08:47:32.169309   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.049262Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"2dcbff584edb18cc","local-member-id":"782c48cbdf98397b","added-peer-id":"782c48cbdf98397b","added-peer-peer-urls":["https://172.20.100.167:2380"]}
	I1014 08:47:32.169309   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.049694Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"2dcbff584edb18cc","local-member-id":"782c48cbdf98397b","cluster-version":"3.5"}
	I1014 08:47:32.169309   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.049815Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I1014 08:47:32.169309   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.056204Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	I1014 08:47:32.169309   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.062166Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I1014 08:47:32.169309   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.062574Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"782c48cbdf98397b","initial-advertise-peer-urls":["https://172.20.106.123:2380"],"listen-peer-urls":["https://172.20.106.123:2380"],"advertise-client-urls":["https://172.20.106.123:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.20.106.123:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I1014 08:47:32.169309   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.062654Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I1014 08:47:32.169309   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.062749Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"172.20.106.123:2380"}
	I1014 08:47:32.169309   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:12.062764Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"172.20.106.123:2380"}
	I1014 08:47:32.169309   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.489231Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"782c48cbdf98397b is starting a new election at term 2"}
	I1014 08:47:32.169309   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.489328Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"782c48cbdf98397b became pre-candidate at term 2"}
	I1014 08:47:32.169309   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.489358Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"782c48cbdf98397b received MsgPreVoteResp from 782c48cbdf98397b at term 2"}
	I1014 08:47:32.169309   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.489374Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"782c48cbdf98397b became candidate at term 3"}
	I1014 08:47:32.169309   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.489385Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"782c48cbdf98397b received MsgVoteResp from 782c48cbdf98397b at term 3"}
	I1014 08:47:32.169309   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.489395Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"782c48cbdf98397b became leader at term 3"}
	I1014 08:47:32.169309   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.489487Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 782c48cbdf98397b elected leader 782c48cbdf98397b at term 3"}
	I1014 08:47:32.169309   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.496949Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I1014 08:47:32.169309   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.496902Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"782c48cbdf98397b","local-member-attributes":"{Name:multinode-671000 ClientURLs:[https://172.20.106.123:2379]}","request-path":"/0/members/782c48cbdf98397b/attributes","cluster-id":"2dcbff584edb18cc","publish-timeout":"7s"}
	I1014 08:47:32.169309   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.497822Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I1014 08:47:32.169309   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.499586Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I1014 08:47:32.169309   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.499631Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I1014 08:47:32.169309   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.500815Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	I1014 08:47:32.169309   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.502392Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.20.106.123:2379"}
	I1014 08:47:32.169309   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.503879Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	I1014 08:47:32.170315   15224 command_runner.go:130] ! {"level":"info","ts":"2024-10-14T15:46:13.505686Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
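
The etcd block above is a clean single-member restart: the WAL is replayed at term 2, the member pre-votes, votes for itself, becomes leader at term 3, and only then serves client traffic on 2379. A minimal sketch, assuming it runs inside the minikube VM (the program is illustrative), that waits on the plaintext health endpoint etcd exposes on the metrics listener shown in the log (127.0.0.1:2381):

    // etcdhealth.go - illustrative sketch: poll the metrics listener the log
    // shows etcd serving on 127.0.0.1:2381; /health reports healthy once the
    // member has a leader.
    package main

    import (
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{Timeout: 2 * time.Second}
    	for i := 0; i < 10; i++ {
    		resp, err := client.Get("http://127.0.0.1:2381/health")
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			fmt.Printf("etcd health: %s\n", body)
    			return
    		}
    		time.Sleep(time.Second)
    	}
    	fmt.Println("etcd metrics endpoint never came up")
    }
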
	I1014 08:47:32.176308   15224 logs.go:123] Gathering logs for kube-proxy [ea19428d7036] ...
	I1014 08:47:32.176308   15224 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea19428d7036"
	I1014 08:47:32.205310   15224 command_runner.go:130] ! I1014 15:22:47.466748       1 server_linux.go:66] "Using iptables proxy"
	I1014 08:47:32.205985   15224 command_runner.go:130] ! E1014 15:22:47.511018       1 proxier.go:734] "Error cleaning up nftables rules" err=<
	I1014 08:47:32.205985   15224 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-24: Error: Could not process rule: Operation not supported
	I1014 08:47:32.205985   15224 command_runner.go:130] ! 	add table ip kube-proxy
	I1014 08:47:32.205985   15224 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^
	I1014 08:47:32.206058   15224 command_runner.go:130] !  >
	I1014 08:47:32.206058   15224 command_runner.go:130] ! E1014 15:22:47.546236       1 proxier.go:734] "Error cleaning up nftables rules" err=<
	I1014 08:47:32.206058   15224 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
	I1014 08:47:32.206058   15224 command_runner.go:130] ! 	add table ip6 kube-proxy
	I1014 08:47:32.206058   15224 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^^
	I1014 08:47:32.206058   15224 command_runner.go:130] !  >
	I1014 08:47:32.206058   15224 command_runner.go:130] ! I1014 15:22:47.606437       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["172.20.100.167"]
	I1014 08:47:32.206058   15224 command_runner.go:130] ! E1014 15:22:47.606505       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1014 08:47:32.206058   15224 command_runner.go:130] ! I1014 15:22:47.788755       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1014 08:47:32.206058   15224 command_runner.go:130] ! I1014 15:22:47.788820       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1014 08:47:32.206058   15224 command_runner.go:130] ! I1014 15:22:47.788852       1 server_linux.go:169] "Using iptables Proxier"
	I1014 08:47:32.206058   15224 command_runner.go:130] ! I1014 15:22:47.807050       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1014 08:47:32.206058   15224 command_runner.go:130] ! I1014 15:22:47.807424       1 server.go:483] "Version info" version="v1.31.1"
	I1014 08:47:32.206058   15224 command_runner.go:130] ! I1014 15:22:47.807440       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 08:47:32.206058   15224 command_runner.go:130] ! I1014 15:22:47.810361       1 config.go:199] "Starting service config controller"
	I1014 08:47:32.206058   15224 command_runner.go:130] ! I1014 15:22:47.810416       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1014 08:47:32.206058   15224 command_runner.go:130] ! I1014 15:22:47.810600       1 config.go:105] "Starting endpoint slice config controller"
	I1014 08:47:32.206058   15224 command_runner.go:130] ! I1014 15:22:47.810629       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1014 08:47:32.206058   15224 command_runner.go:130] ! I1014 15:22:47.811297       1 config.go:328] "Starting node config controller"
	I1014 08:47:32.206058   15224 command_runner.go:130] ! I1014 15:22:47.811309       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1014 08:47:32.206058   15224 command_runner.go:130] ! I1014 15:22:47.911443       1 shared_informer.go:320] Caches are synced for node config
	I1014 08:47:32.206058   15224 command_runner.go:130] ! I1014 15:22:47.911791       1 shared_informer.go:320] Caches are synced for service config
	I1014 08:47:32.206058   15224 command_runner.go:130] ! I1014 15:22:47.911866       1 shared_informer.go:320] Caches are synced for endpoint slice config
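
The two "Error cleaning up nftables rules ... Operation not supported" entries above are kube-proxy's startup sweep for stale nftables state on a guest kernel that apparently lacks nftables support; it then proceeds with the iptables proxier ("Using iptables Proxier"), so the errors are cosmetic. A minimal sketch, assuming root inside the VM (illustrative, not part of minikube), that confirms the iptables proxier actually installed its NAT chains:

    // proxycheck.go - illustrative sketch, run as root inside the VM: verify
    // that the iptables proxier kube-proxy fell back to has created its
    // KUBE-SERVICES chain in the nat table.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	out, err := exec.Command("iptables-save", "-t", "nat").Output()
    	if err != nil {
    		fmt.Println("iptables-save failed:", err)
    		return
    	}
    	if strings.Contains(string(out), "KUBE-SERVICES") {
    		fmt.Println("iptables proxier chains present")
    	} else {
    		fmt.Println("no KUBE-SERVICES chain found")
    	}
    }
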
	I1014 08:47:32.209234   15224 logs.go:123] Gathering logs for kube-controller-manager [8af48c446f7e] ...
	I1014 08:47:32.209234   15224 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8af48c446f7e"
	I1014 08:47:32.239806   15224 command_runner.go:130] ! I1014 15:46:12.989235       1 serving.go:386] Generated self-signed cert in-memory
	I1014 08:47:32.239806   15224 command_runner.go:130] ! I1014 15:46:13.820617       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I1014 08:47:32.239917   15224 command_runner.go:130] ! I1014 15:46:13.820897       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 08:47:32.239917   15224 command_runner.go:130] ! I1014 15:46:13.823101       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1014 08:47:32.239917   15224 command_runner.go:130] ! I1014 15:46:13.823494       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1014 08:47:32.239917   15224 command_runner.go:130] ! I1014 15:46:13.824132       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I1014 08:47:32.239917   15224 command_runner.go:130] ! I1014 15:46:13.824214       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1014 08:47:32.239917   15224 command_runner.go:130] ! I1014 15:46:17.208145       1 controllermanager.go:797] "Started controller" controller="serviceaccount-token-controller"
	I1014 08:47:32.239917   15224 command_runner.go:130] ! I1014 15:46:17.211496       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I1014 08:47:32.240106   15224 command_runner.go:130] ! I1014 15:46:17.268813       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I1014 08:47:32.240126   15224 command_runner.go:130] ! I1014 15:46:17.269727       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I1014 08:47:32.240215   15224 command_runner.go:130] ! I1014 15:46:17.270864       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I1014 08:47:32.240215   15224 command_runner.go:130] ! I1014 15:46:17.271094       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I1014 08:47:32.240215   15224 command_runner.go:130] ! I1014 15:46:17.271857       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I1014 08:47:32.240287   15224 command_runner.go:130] ! I1014 15:46:17.271962       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I1014 08:47:32.240344   15224 command_runner.go:130] ! I1014 15:46:17.272049       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I1014 08:47:32.240384   15224 command_runner.go:130] ! I1014 15:46:17.272075       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I1014 08:47:32.240384   15224 command_runner.go:130] ! I1014 15:46:17.273540       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I1014 08:47:32.240384   15224 command_runner.go:130] ! I1014 15:46:17.274245       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I1014 08:47:32.240454   15224 command_runner.go:130] ! I1014 15:46:17.274579       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I1014 08:47:32.240479   15224 command_runner.go:130] ! I1014 15:46:17.274747       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I1014 08:47:32.240508   15224 command_runner.go:130] ! I1014 15:46:17.274772       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I1014 08:47:32.240508   15224 command_runner.go:130] ! I1014 15:46:17.275348       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I1014 08:47:32.240508   15224 command_runner.go:130] ! I1014 15:46:17.275380       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I1014 08:47:32.240508   15224 command_runner.go:130] ! I1014 15:46:17.275397       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I1014 08:47:32.240508   15224 command_runner.go:130] ! I1014 15:46:17.275571       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I1014 08:47:32.240508   15224 command_runner.go:130] ! I1014 15:46:17.275603       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I1014 08:47:32.240508   15224 command_runner.go:130] ! W1014 15:46:17.275618       1 shared_informer.go:597] resyncPeriod 13h32m18.096579392s is smaller than resyncCheckPeriod 20h55m54.648340273s and the informer has already started. Changing it to 20h55m54.648340273s
	I1014 08:47:32.240508   15224 command_runner.go:130] ! I1014 15:46:17.276096       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I1014 08:47:32.240508   15224 command_runner.go:130] ! I1014 15:46:17.276150       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I1014 08:47:32.240508   15224 command_runner.go:130] ! I1014 15:46:17.276197       1 controllermanager.go:797] "Started controller" controller="resourcequota-controller"
	I1014 08:47:32.240508   15224 command_runner.go:130] ! I1014 15:46:17.276213       1 resource_quota_controller.go:300] "Starting resource quota controller" logger="resourcequota-controller"
	I1014 08:47:32.240508   15224 command_runner.go:130] ! I1014 15:46:17.276260       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I1014 08:47:32.240508   15224 command_runner.go:130] ! I1014 15:46:17.276359       1 resource_quota_monitor.go:308] "QuotaMonitor running" logger="resourcequota-controller"
	I1014 08:47:32.240508   15224 command_runner.go:130] ! I1014 15:46:17.283642       1 controllermanager.go:797] "Started controller" controller="replicaset-controller"
	I1014 08:47:32.240508   15224 command_runner.go:130] ! I1014 15:46:17.284697       1 replica_set.go:217] "Starting controller" logger="replicaset-controller" name="replicaset"
	I1014 08:47:32.240508   15224 command_runner.go:130] ! I1014 15:46:17.284913       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I1014 08:47:32.240508   15224 command_runner.go:130] ! I1014 15:46:17.288417       1 controllermanager.go:797] "Started controller" controller="cronjob-controller"
	I1014 08:47:32.240508   15224 command_runner.go:130] ! I1014 15:46:17.289073       1 cronjob_controllerv2.go:145] "Starting cronjob controller v2" logger="cronjob-controller"
	I1014 08:47:32.240508   15224 command_runner.go:130] ! I1014 15:46:17.289091       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I1014 08:47:32.240508   15224 command_runner.go:130] ! I1014 15:46:17.292212       1 controllermanager.go:797] "Started controller" controller="certificatesigningrequest-approving-controller"
	I1014 08:47:32.240508   15224 command_runner.go:130] ! I1014 15:46:17.292573       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I1014 08:47:32.241070   15224 command_runner.go:130] ! I1014 15:46:17.292591       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I1014 08:47:32.241070   15224 command_runner.go:130] ! I1014 15:46:17.295276       1 controllermanager.go:797] "Started controller" controller="clusterrole-aggregation-controller"
	I1014 08:47:32.241070   15224 command_runner.go:130] ! I1014 15:46:17.295785       1 controllermanager.go:749] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I1014 08:47:32.241070   15224 command_runner.go:130] ! I1014 15:46:17.298756       1 clusterroleaggregation_controller.go:194] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I1014 08:47:32.241070   15224 command_runner.go:130] ! I1014 15:46:17.299107       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I1014 08:47:32.241070   15224 command_runner.go:130] ! I1014 15:46:17.299997       1 controllermanager.go:797] "Started controller" controller="endpointslice-mirroring-controller"
	I1014 08:47:32.241070   15224 command_runner.go:130] ! I1014 15:46:17.302040       1 endpointslicemirroring_controller.go:227] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I1014 08:47:32.241070   15224 command_runner.go:130] ! I1014 15:46:17.302058       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I1014 08:47:32.241260   15224 command_runner.go:130] ! I1014 15:46:17.305668       1 controllermanager.go:797] "Started controller" controller="replicationcontroller-controller"
	I1014 08:47:32.241311   15224 command_runner.go:130] ! I1014 15:46:17.308801       1 replica_set.go:217] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I1014 08:47:32.241311   15224 command_runner.go:130] ! I1014 15:46:17.308819       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I1014 08:47:32.241311   15224 command_runner.go:130] ! I1014 15:46:17.318320       1 shared_informer.go:320] Caches are synced for tokens
	I1014 08:47:32.241415   15224 command_runner.go:130] ! I1014 15:46:17.329856       1 controllermanager.go:797] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I1014 08:47:32.241444   15224 command_runner.go:130] ! I1014 15:46:17.330990       1 horizontal.go:201] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I1014 08:47:32.241477   15224 command_runner.go:130] ! I1014 15:46:17.331395       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I1014 08:47:32.241477   15224 command_runner.go:130] ! I1014 15:46:17.345566       1 controllermanager.go:797] "Started controller" controller="disruption-controller"
	I1014 08:47:32.241562   15224 command_runner.go:130] ! I1014 15:46:17.345806       1 disruption.go:452] "Sending events to api server." logger="disruption-controller"
	I1014 08:47:32.241628   15224 command_runner.go:130] ! I1014 15:46:17.345841       1 disruption.go:463] "Starting disruption controller" logger="disruption-controller"
	I1014 08:47:32.241658   15224 command_runner.go:130] ! I1014 15:46:17.345937       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I1014 08:47:32.241658   15224 command_runner.go:130] ! E1014 15:46:17.350088       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I1014 08:47:32.241658   15224 command_runner.go:130] ! I1014 15:46:17.350237       1 controllermanager.go:775] "Warning: skipping controller" controller="service-lb-controller"
	I1014 08:47:32.241658   15224 command_runner.go:130] ! I1014 15:46:17.350277       1 controllermanager.go:775] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I1014 08:47:32.241658   15224 command_runner.go:130] ! I1014 15:46:17.359040       1 controllermanager.go:797] "Started controller" controller="endpoints-controller"
	I1014 08:47:32.241658   15224 command_runner.go:130] ! I1014 15:46:17.360243       1 endpoints_controller.go:182] "Starting endpoint controller" logger="endpoints-controller"
	I1014 08:47:32.241658   15224 command_runner.go:130] ! I1014 15:46:17.360265       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I1014 08:47:32.241658   15224 command_runner.go:130] ! I1014 15:46:17.362115       1 controllermanager.go:797] "Started controller" controller="pod-garbage-collector-controller"
	I1014 08:47:32.241658   15224 command_runner.go:130] ! I1014 15:46:17.362235       1 gc_controller.go:99] "Starting GC controller" logger="pod-garbage-collector-controller"
	I1014 08:47:32.241658   15224 command_runner.go:130] ! I1014 15:46:17.362245       1 shared_informer.go:313] Waiting for caches to sync for GC
	I1014 08:47:32.241658   15224 command_runner.go:130] ! I1014 15:46:17.364537       1 controllermanager.go:797] "Started controller" controller="serviceaccount-controller"
	I1014 08:47:32.241658   15224 command_runner.go:130] ! I1014 15:46:17.364725       1 serviceaccounts_controller.go:114] "Starting service account controller" logger="serviceaccount-controller"
	I1014 08:47:32.241658   15224 command_runner.go:130] ! I1014 15:46:17.364738       1 shared_informer.go:313] Waiting for caches to sync for service account
	I1014 08:47:32.241658   15224 command_runner.go:130] ! I1014 15:46:17.367152       1 controllermanager.go:797] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I1014 08:47:32.241658   15224 command_runner.go:130] ! I1014 15:46:17.367373       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I1014 08:47:32.241658   15224 command_runner.go:130] ! I1014 15:46:17.369619       1 controllermanager.go:797] "Started controller" controller="bootstrap-signer-controller"
	I1014 08:47:32.241658   15224 command_runner.go:130] ! I1014 15:46:17.370097       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I1014 08:47:32.241658   15224 command_runner.go:130] ! I1014 15:46:17.373109       1 controllermanager.go:797] "Started controller" controller="token-cleaner-controller"
	I1014 08:47:32.241658   15224 command_runner.go:130] ! I1014 15:46:17.373475       1 tokencleaner.go:117] "Starting token cleaner controller" logger="token-cleaner-controller"
	I1014 08:47:32.242468   15224 command_runner.go:130] ! I1014 15:46:17.373486       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I1014 08:47:32.242468   15224 command_runner.go:130] ! I1014 15:46:17.373493       1 shared_informer.go:320] Caches are synced for token_cleaner
	I1014 08:47:32.242468   15224 command_runner.go:130] ! I1014 15:46:17.375506       1 controllermanager.go:797] "Started controller" controller="ttl-after-finished-controller"
	I1014 08:47:32.242468   15224 command_runner.go:130] ! I1014 15:46:17.375684       1 ttlafterfinished_controller.go:112] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I1014 08:47:32.242468   15224 command_runner.go:130] ! I1014 15:46:17.375694       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I1014 08:47:32.242468   15224 command_runner.go:130] ! I1014 15:46:17.379552       1 controllermanager.go:797] "Started controller" controller="ephemeral-volume-controller"
	I1014 08:47:32.242468   15224 command_runner.go:130] ! I1014 15:46:17.380063       1 controller.go:173] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I1014 08:47:32.242468   15224 command_runner.go:130] ! I1014 15:46:17.380270       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I1014 08:47:32.242468   15224 command_runner.go:130] ! I1014 15:46:17.413079       1 controllermanager.go:797] "Started controller" controller="namespace-controller"
	I1014 08:47:32.242468   15224 command_runner.go:130] ! I1014 15:46:17.413676       1 namespace_controller.go:202] "Starting namespace controller" logger="namespace-controller"
	I1014 08:47:32.242468   15224 command_runner.go:130] ! I1014 15:46:17.415689       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I1014 08:47:32.242468   15224 command_runner.go:130] ! I1014 15:46:17.418729       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I1014 08:47:32.242468   15224 command_runner.go:130] ! I1014 15:46:17.418858       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I1014 08:47:32.242468   15224 command_runner.go:130] ! I1014 15:46:17.418983       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I1014 08:47:32.242468   15224 command_runner.go:130] ! I1014 15:46:17.420448       1 controllermanager.go:797] "Started controller" controller="certificatesigningrequest-signing-controller"
	I1014 08:47:32.242468   15224 command_runner.go:130] ! I1014 15:46:17.420573       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I1014 08:47:32.242468   15224 command_runner.go:130] ! I1014 15:46:17.420658       1 controllermanager.go:775] "Warning: skipping controller" controller="node-route-controller"
	I1014 08:47:32.242468   15224 command_runner.go:130] ! I1014 15:46:17.420878       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I1014 08:47:32.242468   15224 command_runner.go:130] ! I1014 15:46:17.422022       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I1014 08:47:32.242468   15224 command_runner.go:130] ! I1014 15:46:17.422169       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I1014 08:47:32.242468   15224 command_runner.go:130] ! I1014 15:46:17.422636       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I1014 08:47:32.242468   15224 command_runner.go:130] ! I1014 15:46:17.425521       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I1014 08:47:32.242468   15224 command_runner.go:130] ! I1014 15:46:17.425557       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I1014 08:47:32.242468   15224 command_runner.go:130] ! I1014 15:46:17.425747       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I1014 08:47:32.242468   15224 command_runner.go:130] ! I1014 15:46:17.425569       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I1014 08:47:32.243001   15224 command_runner.go:130] ! I1014 15:46:17.425577       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I1014 08:47:32.243001   15224 command_runner.go:130] ! E1014 15:46:17.429609       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I1014 08:47:32.243001   15224 command_runner.go:130] ! I1014 15:46:17.429771       1 controllermanager.go:775] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I1014 08:47:32.243001   15224 command_runner.go:130] ! I1014 15:46:17.432720       1 controllermanager.go:797] "Started controller" controller="persistentvolume-attach-detach-controller"
	I1014 08:47:32.243001   15224 command_runner.go:130] ! I1014 15:46:17.433242       1 attach_detach_controller.go:338] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I1014 08:47:32.243134   15224 command_runner.go:130] ! I1014 15:46:17.433509       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I1014 08:47:32.243134   15224 command_runner.go:130] ! I1014 15:46:17.437867       1 controllermanager.go:797] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I1014 08:47:32.243217   15224 command_runner.go:130] ! I1014 15:46:17.438432       1 pvc_protection_controller.go:105] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I1014 08:47:32.243217   15224 command_runner.go:130] ! I1014 15:46:17.438754       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I1014 08:47:32.243308   15224 command_runner.go:130] ! I1014 15:46:17.466996       1 controllermanager.go:797] "Started controller" controller="garbage-collector-controller"
	I1014 08:47:32.243308   15224 command_runner.go:130] ! I1014 15:46:17.467178       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I1014 08:47:32.243308   15224 command_runner.go:130] ! I1014 15:46:17.467191       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I1014 08:47:32.243308   15224 command_runner.go:130] ! I1014 15:46:17.467211       1 graph_builder.go:351] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I1014 08:47:32.243377   15224 command_runner.go:130] ! I1014 15:46:17.513974       1 controllermanager.go:797] "Started controller" controller="daemonset-controller"
	I1014 08:47:32.243377   15224 command_runner.go:130] ! I1014 15:46:17.514092       1 daemon_controller.go:294] "Starting daemon sets controller" logger="daemonset-controller"
	I1014 08:47:32.243405   15224 command_runner.go:130] ! I1014 15:46:17.514103       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I1014 08:47:32.243453   15224 command_runner.go:130] ! I1014 15:46:17.612272       1 controllermanager.go:797] "Started controller" controller="deployment-controller"
	I1014 08:47:32.243471   15224 command_runner.go:130] ! I1014 15:46:17.612390       1 deployment_controller.go:173] "Starting controller" logger="deployment-controller" controller="deployment"
	I1014 08:47:32.243499   15224 command_runner.go:130] ! I1014 15:46:17.612405       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I1014 08:47:32.243639   15224 command_runner.go:130] ! I1014 15:46:17.715625       1 controllermanager.go:797] "Started controller" controller="ttl-controller"
	I1014 08:47:32.243672   15224 command_runner.go:130] ! I1014 15:46:17.718491       1 ttl_controller.go:127] "Starting TTL controller" logger="ttl-controller"
	I1014 08:47:32.243672   15224 command_runner.go:130] ! I1014 15:46:17.718512       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I1014 08:47:32.243672   15224 command_runner.go:130] ! I1014 15:46:17.762259       1 node_lifecycle_controller.go:430] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I1014 08:47:32.243710   15224 command_runner.go:130] ! I1014 15:46:17.762792       1 controllermanager.go:797] "Started controller" controller="node-lifecycle-controller"
	I1014 08:47:32.243710   15224 command_runner.go:130] ! I1014 15:46:17.763108       1 node_lifecycle_controller.go:464] "Sending events to api server" logger="node-lifecycle-controller"
	I1014 08:47:32.243710   15224 command_runner.go:130] ! I1014 15:46:17.763488       1 node_lifecycle_controller.go:475] "Starting node controller" logger="node-lifecycle-controller"
	I1014 08:47:32.243710   15224 command_runner.go:130] ! I1014 15:46:17.763636       1 shared_informer.go:313] Waiting for caches to sync for taint
	I1014 08:47:32.243710   15224 command_runner.go:130] ! I1014 15:46:17.815269       1 controllermanager.go:797] "Started controller" controller="persistentvolume-expander-controller"
	I1014 08:47:32.243796   15224 command_runner.go:130] ! I1014 15:46:17.815926       1 controllermanager.go:749] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I1014 08:47:32.243796   15224 command_runner.go:130] ! I1014 15:46:17.815820       1 expand_controller.go:328] "Starting expand controller" logger="persistentvolume-expander-controller"
	I1014 08:47:32.243864   15224 command_runner.go:130] ! I1014 15:46:17.815981       1 shared_informer.go:313] Waiting for caches to sync for expand
	I1014 08:47:32.243864   15224 command_runner.go:130] ! I1014 15:46:17.865803       1 controllermanager.go:797] "Started controller" controller="taint-eviction-controller"
	I1014 08:47:32.243892   15224 command_runner.go:130] ! I1014 15:46:17.865833       1 controllermanager.go:749] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I1014 08:47:32.243940   15224 command_runner.go:130] ! I1014 15:46:17.865908       1 taint_eviction.go:281] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I1014 08:47:32.244008   15224 command_runner.go:130] ! I1014 15:46:17.865945       1 taint_eviction.go:287] "Sending events to api server" logger="taint-eviction-controller"
	I1014 08:47:32.244008   15224 command_runner.go:130] ! I1014 15:46:17.865986       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I1014 08:47:32.244008   15224 command_runner.go:130] ! I1014 15:46:17.923932       1 controllermanager.go:797] "Started controller" controller="job-controller"
	I1014 08:47:32.244059   15224 command_runner.go:130] ! I1014 15:46:17.924153       1 job_controller.go:226] "Starting job controller" logger="job-controller"
	I1014 08:47:32.244059   15224 command_runner.go:130] ! I1014 15:46:17.924184       1 shared_informer.go:313] Waiting for caches to sync for job
	I1014 08:47:32.244120   15224 command_runner.go:130] ! I1014 15:46:17.978728       1 controllermanager.go:797] "Started controller" controller="root-ca-certificate-publisher-controller"
	I1014 08:47:32.244120   15224 command_runner.go:130] ! I1014 15:46:17.978796       1 publisher.go:107] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I1014 08:47:32.244145   15224 command_runner.go:130] ! I1014 15:46:17.978809       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I1014 08:47:32.244145   15224 command_runner.go:130] ! I1014 15:46:18.018003       1 controllermanager.go:797] "Started controller" controller="endpointslice-controller"
	I1014 08:47:32.244145   15224 command_runner.go:130] ! I1014 15:46:18.018177       1 endpointslice_controller.go:281] "Starting endpoint slice controller" logger="endpointslice-controller"
	I1014 08:47:32.244145   15224 command_runner.go:130] ! I1014 15:46:18.018192       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I1014 08:47:32.244203   15224 command_runner.go:130] ! I1014 15:46:18.077409       1 controllermanager.go:797] "Started controller" controller="statefulset-controller"
	I1014 08:47:32.244203   15224 command_runner.go:130] ! I1014 15:46:18.078007       1 stateful_set.go:166] "Starting stateful set controller" logger="statefulset-controller"
	I1014 08:47:32.244203   15224 command_runner.go:130] ! I1014 15:46:18.078026       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I1014 08:47:32.244203   15224 command_runner.go:130] ! I1014 15:46:18.245465       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I1014 08:47:32.244282   15224 command_runner.go:130] ! I1014 15:46:18.246368       1 controllermanager.go:797] "Started controller" controller="node-ipam-controller"
	I1014 08:47:32.244312   15224 command_runner.go:130] ! I1014 15:46:18.246712       1 node_ipam_controller.go:141] "Starting ipam controller" logger="node-ipam-controller"
	I1014 08:47:32.244312   15224 command_runner.go:130] ! I1014 15:46:18.246910       1 shared_informer.go:313] Waiting for caches to sync for node
	I1014 08:47:32.244312   15224 command_runner.go:130] ! I1014 15:46:18.264869       1 controllermanager.go:797] "Started controller" controller="persistentvolume-binder-controller"
	I1014 08:47:32.244312   15224 command_runner.go:130] ! I1014 15:46:18.264984       1 pv_controller_base.go:308] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I1014 08:47:32.244312   15224 command_runner.go:130] ! I1014 15:46:18.266232       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I1014 08:47:32.244312   15224 command_runner.go:130] ! I1014 15:46:18.321121       1 controllermanager.go:797] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I1014 08:47:32.244312   15224 command_runner.go:130] ! I1014 15:46:18.323482       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I1014 08:47:32.244312   15224 command_runner.go:130] ! I1014 15:46:18.323903       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I1014 08:47:32.244312   15224 command_runner.go:130] ! I1014 15:46:18.431796       1 controllermanager.go:797] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I1014 08:47:32.244312   15224 command_runner.go:130] ! I1014 15:46:18.431873       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I1014 08:47:32.244312   15224 command_runner.go:130] ! I1014 15:46:18.465851       1 controllermanager.go:797] "Started controller" controller="persistentvolume-protection-controller"
	I1014 08:47:32.244312   15224 command_runner.go:130] ! I1014 15:46:18.468767       1 pv_protection_controller.go:81] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I1014 08:47:32.244312   15224 command_runner.go:130] ! I1014 15:46:18.469028       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I1014 08:47:32.244312   15224 command_runner.go:130] ! I1014 15:46:18.485571       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I1014 08:47:32.244312   15224 command_runner.go:130] ! I1014 15:46:18.534720       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I1014 08:47:32.244312   15224 command_runner.go:130] ! I1014 15:46:18.539015       1 shared_informer.go:320] Caches are synced for PVC protection
	I1014 08:47:32.244312   15224 command_runner.go:130] ! I1014 15:46:18.541399       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-671000\" does not exist"
	I1014 08:47:32.244312   15224 command_runner.go:130] ! I1014 15:46:18.541615       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-671000-m02\" does not exist"
	I1014 08:47:32.244312   15224 command_runner.go:130] ! I1014 15:46:18.549102       1 shared_informer.go:320] Caches are synced for TTL
	I1014 08:47:32.244312   15224 command_runner.go:130] ! I1014 15:46:18.549549       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-671000-m03\" does not exist"
	I1014 08:47:32.244312   15224 command_runner.go:130] ! I1014 15:46:18.550590       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I1014 08:47:32.244312   15224 command_runner.go:130] ! I1014 15:46:18.551387       1 shared_informer.go:320] Caches are synced for node
	I1014 08:47:32.244312   15224 command_runner.go:130] ! I1014 15:46:18.554673       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I1014 08:47:32.244312   15224 command_runner.go:130] ! I1014 15:46:18.557592       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1014 08:47:32.244312   15224 command_runner.go:130] ! I1014 15:46:18.558471       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I1014 08:47:32.244312   15224 command_runner.go:130] ! I1014 15:46:18.558669       1 shared_informer.go:320] Caches are synced for cidrallocator
	I1014 08:47:32.244312   15224 command_runner.go:130] ! I1014 15:46:18.559066       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000"
	I1014 08:47:32.244312   15224 command_runner.go:130] ! I1014 15:46:18.559166       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:32.244312   15224 command_runner.go:130] ! I1014 15:46:18.559144       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:32.244312   15224 command_runner.go:130] ! I1014 15:46:18.560823       1 shared_informer.go:320] Caches are synced for endpoint
	I1014 08:47:32.244312   15224 command_runner.go:130] ! I1014 15:46:18.563147       1 shared_informer.go:320] Caches are synced for GC
	I1014 08:47:32.244312   15224 command_runner.go:130] ! I1014 15:46:18.566072       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I1014 08:47:32.244839   15224 command_runner.go:130] ! I1014 15:46:18.566447       1 shared_informer.go:320] Caches are synced for persistent volume
	I1014 08:47:32.244839   15224 command_runner.go:130] ! I1014 15:46:18.566267       1 shared_informer.go:320] Caches are synced for service account
	I1014 08:47:32.244839   15224 command_runner.go:130] ! I1014 15:46:18.570369       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I1014 08:47:32.244839   15224 command_runner.go:130] ! I1014 15:46:18.570522       1 shared_informer.go:320] Caches are synced for PV protection
	I1014 08:47:32.244839   15224 command_runner.go:130] ! I1014 15:46:18.577368       1 shared_informer.go:320] Caches are synced for TTL after finished
	I1014 08:47:32.244839   15224 command_runner.go:130] ! I1014 15:46:18.580187       1 shared_informer.go:320] Caches are synced for crt configmap
	I1014 08:47:32.244839   15224 command_runner.go:130] ! I1014 15:46:18.580534       1 shared_informer.go:320] Caches are synced for ephemeral
	I1014 08:47:32.244839   15224 command_runner.go:130] ! I1014 15:46:18.585372       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I1014 08:47:32.244991   15224 command_runner.go:130] ! I1014 15:46:18.593972       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I1014 08:47:32.244991   15224 command_runner.go:130] ! I1014 15:46:18.595014       1 shared_informer.go:320] Caches are synced for cronjob
	I1014 08:47:32.244991   15224 command_runner.go:130] ! I1014 15:46:18.600012       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I1014 08:47:32.244991   15224 command_runner.go:130] ! I1014 15:46:18.602930       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I1014 08:47:32.245069   15224 command_runner.go:130] ! I1014 15:46:18.609680       1 shared_informer.go:320] Caches are synced for ReplicationController
	I1014 08:47:32.245110   15224 command_runner.go:130] ! I1014 15:46:18.613447       1 shared_informer.go:320] Caches are synced for deployment
	I1014 08:47:32.245110   15224 command_runner.go:130] ! I1014 15:46:18.616246       1 shared_informer.go:320] Caches are synced for expand
	I1014 08:47:32.245110   15224 command_runner.go:130] ! I1014 15:46:18.616739       1 shared_informer.go:320] Caches are synced for namespace
	I1014 08:47:32.245110   15224 command_runner.go:130] ! I1014 15:46:18.618534       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I1014 08:47:32.245177   15224 command_runner.go:130] ! I1014 15:46:18.625249       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I1014 08:47:32.245204   15224 command_runner.go:130] ! I1014 15:46:18.630423       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I1014 08:47:32.245229   15224 command_runner.go:130] ! I1014 15:46:18.632938       1 shared_informer.go:320] Caches are synced for HPA
	I1014 08:47:32.245229   15224 command_runner.go:130] ! I1014 15:46:18.633193       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I1014 08:47:32.245229   15224 command_runner.go:130] ! I1014 15:46:18.634381       1 shared_informer.go:320] Caches are synced for job
	I1014 08:47:32.245229   15224 command_runner.go:130] ! I1014 15:46:18.634623       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1014 08:47:32.245229   15224 command_runner.go:130] ! I1014 15:46:18.634920       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I1014 08:47:32.245229   15224 command_runner.go:130] ! I1014 15:46:18.649619       1 shared_informer.go:320] Caches are synced for disruption
	I1014 08:47:32.245229   15224 command_runner.go:130] ! I1014 15:46:18.668155       1 shared_informer.go:320] Caches are synced for taint
	I1014 08:47:32.245229   15224 command_runner.go:130] ! I1014 15:46:18.670026       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1014 08:47:32.245229   15224 command_runner.go:130] ! I1014 15:46:18.680357       1 shared_informer.go:320] Caches are synced for stateful set
	I1014 08:47:32.245229   15224 command_runner.go:130] ! I1014 15:46:18.700582       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000"
	I1014 08:47:32.245229   15224 command_runner.go:130] ! I1014 15:46:18.708812       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 08:47:32.245229   15224 command_runner.go:130] ! I1014 15:46:18.714134       1 shared_informer.go:320] Caches are synced for daemon sets
	I1014 08:47:32.245229   15224 command_runner.go:130] ! I1014 15:46:18.718536       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-671000"
	I1014 08:47:32.245229   15224 command_runner.go:130] ! I1014 15:46:18.718841       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-671000-m02"
	I1014 08:47:32.245229   15224 command_runner.go:130] ! I1014 15:46:18.719036       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-671000-m03"
	I1014 08:47:32.245229   15224 command_runner.go:130] ! I1014 15:46:18.721210       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="135.448763ms"
	I1014 08:47:32.245229   15224 command_runner.go:130] ! I1014 15:46:18.721514       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="40.1µs"
	I1014 08:47:32.245229   15224 command_runner.go:130] ! I1014 15:46:18.721809       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="136.173363ms"
	I1014 08:47:32.245229   15224 command_runner.go:130] ! I1014 15:46:18.722033       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="29.5µs"
	I1014 08:47:32.245229   15224 command_runner.go:130] ! I1014 15:46:18.722234       1 node_lifecycle_controller.go:1036] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1014 08:47:32.245229   15224 command_runner.go:130] ! I1014 15:46:18.777385       1 shared_informer.go:320] Caches are synced for resource quota
	I1014 08:47:32.245229   15224 command_runner.go:130] ! I1014 15:46:18.786812       1 shared_informer.go:320] Caches are synced for resource quota
	I1014 08:47:32.245229   15224 command_runner.go:130] ! I1014 15:46:18.833914       1 shared_informer.go:320] Caches are synced for attach detach
	I1014 08:47:32.245229   15224 command_runner.go:130] ! I1014 15:46:19.252391       1 shared_informer.go:320] Caches are synced for garbage collector
	I1014 08:47:32.245229   15224 command_runner.go:130] ! I1014 15:46:19.267855       1 shared_informer.go:320] Caches are synced for garbage collector
	I1014 08:47:32.245229   15224 command_runner.go:130] ! I1014 15:46:19.268119       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I1014 08:47:32.245754   15224 command_runner.go:130] ! I1014 15:46:59.871635       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000"
	I1014 08:47:32.245754   15224 command_runner.go:130] ! I1014 15:46:59.892163       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000"
	I1014 08:47:32.245754   15224 command_runner.go:130] ! I1014 15:47:03.736416       1 node_lifecycle_controller.go:1055] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I1014 08:47:32.245754   15224 command_runner.go:130] ! I1014 15:47:13.821153       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 08:47:32.245754   15224 command_runner.go:130] ! I1014 15:47:20.979721       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="114.5µs"
	I1014 08:47:32.245754   15224 command_runner.go:130] ! I1014 15:47:22.061324       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="10.05527ms"
	I1014 08:47:32.245754   15224 command_runner.go:130] ! I1014 15:47:22.062652       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="53.8µs"
	I1014 08:47:32.245883   15224 command_runner.go:130] ! I1014 15:47:22.098955       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="28.422114ms"
	I1014 08:47:32.245883   15224 command_runner.go:130] ! I1014 15:47:22.099794       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="313.699µs"
	I1014 08:47:32.245936   15224 command_runner.go:130] ! I1014 15:47:23.920002       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
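
	The kube-controller-manager excerpt above is a clean post-restart sequence: controllers start, informer caches sync, the node-lifecycle controller enters "master disruption mode" at 15:46:18 while every node is still not-Ready, then exits it at 15:47:03 once nodes report Ready again. One way to pull just those transitions out of a 400-line dump (a sketch only; the container ID 8af48c446f7e comes from the container listing below):

	    docker logs --tail 400 8af48c446f7e 2>&1 | grep -E "disruption mode|Caches are synced"
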
	I1014 08:47:32.261536   15224 logs.go:123] Gathering logs for container status ...
	I1014 08:47:32.261536   15224 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 08:47:32.331202   15224 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I1014 08:47:32.331383   15224 command_runner.go:130] > 1adddc667bd90       8c811b4aec35f                                                                                         12 seconds ago       Running             busybox                   1                   9092d17516eb3       busybox-7dff88458-vlp7j
	I1014 08:47:32.331383   15224 command_runner.go:130] > 5d223e2e64fcd       c69fa2e9cbf5f                                                                                         12 seconds ago       Running             coredns                   1                   429b989a1a986       coredns-7c65d6cfc9-fs9ct
	I1014 08:47:32.331383   15224 command_runner.go:130] > 9d526b02ee41c       6e38f40d628db                                                                                         30 seconds ago       Running             storage-provisioner       2                   cdcdd532ba136       storage-provisioner
	I1014 08:47:32.331481   15224 command_runner.go:130] > bba035362eb97       3a5bc24055c9e                                                                                         About a minute ago   Running             kindnet-cni               1                   7bcadf1f0885f       kindnet-wqrx6
	I1014 08:47:32.331481   15224 command_runner.go:130] > c76c258568107       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   cdcdd532ba136       storage-provisioner
	I1014 08:47:32.331481   15224 command_runner.go:130] > e83db276dec37       60c005f310ff3                                                                                         About a minute ago   Running             kube-proxy                1                   6f8bdf552734e       kube-proxy-r74dx
	I1014 08:47:32.331570   15224 command_runner.go:130] > 48c8492e231e1       2e96e5913fc06                                                                                         About a minute ago   Running             etcd                      0                   0697a11790e80       etcd-multinode-671000
	I1014 08:47:32.331685   15224 command_runner.go:130] > 8af48c446f7e1       175ffd71cce3d                                                                                         About a minute ago   Running             kube-controller-manager   1                   7bd4c36606eef       kube-controller-manager-multinode-671000
	I1014 08:47:32.331792   15224 command_runner.go:130] > a834664fc8b80       6bab7719df100                                                                                         About a minute ago   Running             kube-apiserver            0                   6155e8be2d5d7       kube-apiserver-multinode-671000
	I1014 08:47:32.331848   15224 command_runner.go:130] > d428685276e1e       9aa1fad941575                                                                                         About a minute ago   Running             kube-scheduler            1                   1d3033f871fb1       kube-scheduler-multinode-671000
	I1014 08:47:32.331908   15224 command_runner.go:130] > cbf0b40e378b9       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   20 minutes ago       Exited              busybox                   0                   06e529266db4b       busybox-7dff88458-vlp7j
	I1014 08:47:32.331908   15224 command_runner.go:130] > d9831e9f8ce8c       c69fa2e9cbf5f                                                                                         24 minutes ago       Exited              coredns                   0                   2f8cc9a218fef       coredns-7c65d6cfc9-fs9ct
	I1014 08:47:32.331973   15224 command_runner.go:130] > fcdf89a3ac8ce       kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387              24 minutes ago       Exited              kindnet-cni               0                   5e48ddcfdf90a       kindnet-wqrx6
	I1014 08:47:32.331999   15224 command_runner.go:130] > ea19428d70363       60c005f310ff3                                                                                         24 minutes ago       Exited              kube-proxy                0                   7144d8ce208cf       kube-proxy-r74dx
	I1014 08:47:32.331999   15224 command_runner.go:130] > 661e75bbf6b46       9aa1fad941575                                                                                         24 minutes ago       Exited              kube-scheduler            0                   2dc78387553ff       kube-scheduler-multinode-671000
	I1014 08:47:32.332082   15224 command_runner.go:130] > 712aad669c9f6       175ffd71cce3d                                                                                         24 minutes ago       Exited              kube-controller-manager   0                   bfdde08319e32       kube-controller-manager-multinode-671000
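
	The container listing is consistent with a VM restart of multinode-671000: etcd and kube-apiserver run at ATTEMPT 0 in freshly created pods (recreated rather than restarted), while kube-scheduler, kube-controller-manager, kube-proxy, kindnet, coredns and busybox run attempt 1 next to their Exited attempt-0 instances from roughly 24 minutes earlier. The same view can be reproduced from the host (a sketch; assumes the profile name from these logs):

	    minikube ssh -p multinode-671000 "sudo crictl ps -a"
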
	I1014 08:47:32.334192   15224 logs.go:123] Gathering logs for kube-apiserver [a834664fc8b8] ...
	I1014 08:47:32.334798   15224 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a834664fc8b8"
	I1014 08:47:32.362254   15224 command_runner.go:130] ! I1014 15:46:12.133612       1 options.go:228] external host was not specified, using 172.20.106.123
	I1014 08:47:32.362254   15224 command_runner.go:130] ! I1014 15:46:12.139596       1 server.go:142] Version: v1.31.1
	I1014 08:47:32.362254   15224 command_runner.go:130] ! I1014 15:46:12.140322       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 08:47:32.362254   15224 command_runner.go:130] ! I1014 15:46:13.070213       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I1014 08:47:32.362254   15224 command_runner.go:130] ! I1014 15:46:13.112422       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1014 08:47:32.362254   15224 command_runner.go:130] ! I1014 15:46:13.116622       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I1014 08:47:32.362254   15224 command_runner.go:130] ! I1014 15:46:13.116890       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1014 08:47:32.362254   15224 command_runner.go:130] ! I1014 15:46:13.117611       1 instance.go:232] Using reconciler: lease
	I1014 08:47:32.362254   15224 command_runner.go:130] ! I1014 15:46:13.606403       1 handler.go:286] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I1014 08:47:32.362254   15224 command_runner.go:130] ! W1014 15:46:13.606961       1 genericapiserver.go:765] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:32.362254   15224 command_runner.go:130] ! I1014 15:46:13.910757       1 handler.go:286] Adding GroupVersion  v1 to ResourceManager
	I1014 08:47:32.362254   15224 command_runner.go:130] ! I1014 15:46:13.911096       1 apis.go:105] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I1014 08:47:32.362254   15224 command_runner.go:130] ! I1014 15:46:14.140196       1 apis.go:105] API group "storagemigration.k8s.io" is not enabled, skipping.
	I1014 08:47:32.362254   15224 command_runner.go:130] ! I1014 15:46:14.332586       1 apis.go:105] API group "resource.k8s.io" is not enabled, skipping.
	I1014 08:47:32.362254   15224 command_runner.go:130] ! I1014 15:46:14.344695       1 handler.go:286] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I1014 08:47:32.362254   15224 command_runner.go:130] ! W1014 15:46:14.344792       1 genericapiserver.go:765] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:32.362254   15224 command_runner.go:130] ! W1014 15:46:14.344802       1 genericapiserver.go:765] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I1014 08:47:32.362254   15224 command_runner.go:130] ! I1014 15:46:14.345547       1 handler.go:286] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I1014 08:47:32.362254   15224 command_runner.go:130] ! W1014 15:46:14.345645       1 genericapiserver.go:765] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:32.362254   15224 command_runner.go:130] ! I1014 15:46:14.346729       1 handler.go:286] Adding GroupVersion autoscaling v2 to ResourceManager
	I1014 08:47:32.362254   15224 command_runner.go:130] ! I1014 15:46:14.348142       1 handler.go:286] Adding GroupVersion autoscaling v1 to ResourceManager
	I1014 08:47:32.362254   15224 command_runner.go:130] ! W1014 15:46:14.348261       1 genericapiserver.go:765] Skipping API autoscaling/v2beta1 because it has no resources.
	I1014 08:47:32.362254   15224 command_runner.go:130] ! W1014 15:46:14.348272       1 genericapiserver.go:765] Skipping API autoscaling/v2beta2 because it has no resources.
	I1014 08:47:32.362254   15224 command_runner.go:130] ! I1014 15:46:14.350632       1 handler.go:286] Adding GroupVersion batch v1 to ResourceManager
	I1014 08:47:32.362254   15224 command_runner.go:130] ! W1014 15:46:14.350741       1 genericapiserver.go:765] Skipping API batch/v1beta1 because it has no resources.
	I1014 08:47:32.362254   15224 command_runner.go:130] ! I1014 15:46:14.352378       1 handler.go:286] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I1014 08:47:32.362254   15224 command_runner.go:130] ! W1014 15:46:14.352489       1 genericapiserver.go:765] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:32.362254   15224 command_runner.go:130] ! W1014 15:46:14.352501       1 genericapiserver.go:765] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I1014 08:47:32.362254   15224 command_runner.go:130] ! I1014 15:46:14.353674       1 handler.go:286] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I1014 08:47:32.362254   15224 command_runner.go:130] ! W1014 15:46:14.353813       1 genericapiserver.go:765] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:32.362254   15224 command_runner.go:130] ! W1014 15:46:14.353843       1 genericapiserver.go:765] Skipping API coordination.k8s.io/v1alpha1 because it has no resources.
	I1014 08:47:32.362254   15224 command_runner.go:130] ! I1014 15:46:14.355117       1 handler.go:286] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I1014 08:47:32.362254   15224 command_runner.go:130] ! W1014 15:46:14.355256       1 genericapiserver.go:765] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:32.362254   15224 command_runner.go:130] ! I1014 15:46:14.358401       1 handler.go:286] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I1014 08:47:32.363266   15224 command_runner.go:130] ! W1014 15:46:14.358517       1 genericapiserver.go:765] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:32.363266   15224 command_runner.go:130] ! W1014 15:46:14.358528       1 genericapiserver.go:765] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I1014 08:47:32.363266   15224 command_runner.go:130] ! I1014 15:46:14.359534       1 handler.go:286] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I1014 08:47:32.363266   15224 command_runner.go:130] ! W1014 15:46:14.359632       1 genericapiserver.go:765] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:32.363266   15224 command_runner.go:130] ! W1014 15:46:14.359643       1 genericapiserver.go:765] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I1014 08:47:32.363266   15224 command_runner.go:130] ! I1014 15:46:14.360836       1 handler.go:286] Adding GroupVersion policy v1 to ResourceManager
	I1014 08:47:32.363266   15224 command_runner.go:130] ! W1014 15:46:14.360942       1 genericapiserver.go:765] Skipping API policy/v1beta1 because it has no resources.
	I1014 08:47:32.363266   15224 command_runner.go:130] ! I1014 15:46:14.363702       1 handler.go:286] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I1014 08:47:32.363266   15224 command_runner.go:130] ! W1014 15:46:14.363848       1 genericapiserver.go:765] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:32.363266   15224 command_runner.go:130] ! W1014 15:46:14.363860       1 genericapiserver.go:765] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I1014 08:47:32.363266   15224 command_runner.go:130] ! I1014 15:46:14.364685       1 handler.go:286] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I1014 08:47:32.363266   15224 command_runner.go:130] ! W1014 15:46:14.364801       1 genericapiserver.go:765] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:32.363266   15224 command_runner.go:130] ! W1014 15:46:14.364812       1 genericapiserver.go:765] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I1014 08:47:32.363266   15224 command_runner.go:130] ! I1014 15:46:14.368101       1 handler.go:286] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I1014 08:47:32.363266   15224 command_runner.go:130] ! W1014 15:46:14.368216       1 genericapiserver.go:765] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:32.363266   15224 command_runner.go:130] ! W1014 15:46:14.368228       1 genericapiserver.go:765] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I1014 08:47:32.363266   15224 command_runner.go:130] ! I1014 15:46:14.370008       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1 to ResourceManager
	I1014 08:47:32.363266   15224 command_runner.go:130] ! I1014 15:46:14.371702       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
	I1014 08:47:32.363266   15224 command_runner.go:130] ! W1014 15:46:14.371808       1 genericapiserver.go:765] Skipping API flowcontrol.apiserver.k8s.io/v1beta2 because it has no resources.
	I1014 08:47:32.363266   15224 command_runner.go:130] ! W1014 15:46:14.371818       1 genericapiserver.go:765] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:32.363266   15224 command_runner.go:130] ! I1014 15:46:14.376771       1 handler.go:286] Adding GroupVersion apps v1 to ResourceManager
	I1014 08:47:32.363266   15224 command_runner.go:130] ! W1014 15:46:14.376868       1 genericapiserver.go:765] Skipping API apps/v1beta2 because it has no resources.
	I1014 08:47:32.363266   15224 command_runner.go:130] ! W1014 15:46:14.376877       1 genericapiserver.go:765] Skipping API apps/v1beta1 because it has no resources.
	I1014 08:47:32.363266   15224 command_runner.go:130] ! I1014 15:46:14.379998       1 handler.go:286] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I1014 08:47:32.363266   15224 command_runner.go:130] ! W1014 15:46:14.380101       1 genericapiserver.go:765] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:32.363266   15224 command_runner.go:130] ! W1014 15:46:14.380112       1 genericapiserver.go:765] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I1014 08:47:32.363266   15224 command_runner.go:130] ! I1014 15:46:14.380956       1 handler.go:286] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I1014 08:47:32.363266   15224 command_runner.go:130] ! W1014 15:46:14.381059       1 genericapiserver.go:765] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:32.363266   15224 command_runner.go:130] ! I1014 15:46:14.395072       1 handler.go:286] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I1014 08:47:32.363266   15224 command_runner.go:130] ! W1014 15:46:14.395116       1 genericapiserver.go:765] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I1014 08:47:32.363266   15224 command_runner.go:130] ! I1014 15:46:15.014537       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1014 08:47:32.363266   15224 command_runner.go:130] ! I1014 15:46:15.014702       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1014 08:47:32.363266   15224 command_runner.go:130] ! I1014 15:46:15.016123       1 dynamic_serving_content.go:135] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I1014 08:47:32.363266   15224 command_runner.go:130] ! I1014 15:46:15.016823       1 secure_serving.go:213] Serving securely on [::]:8443
	I1014 08:47:32.363266   15224 command_runner.go:130] ! I1014 15:46:15.017426       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1014 08:47:32.363266   15224 command_runner.go:130] ! I1014 15:46:15.018450       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I1014 08:47:32.363266   15224 command_runner.go:130] ! I1014 15:46:15.018766       1 remote_available_controller.go:411] Starting RemoteAvailability controller
	I1014 08:47:32.363266   15224 command_runner.go:130] ! I1014 15:46:15.018850       1 cache.go:32] Waiting for caches to sync for RemoteAvailability controller
	I1014 08:47:32.363266   15224 command_runner.go:130] ! I1014 15:46:15.018985       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I1014 08:47:32.363266   15224 command_runner.go:130] ! I1014 15:46:15.021391       1 controller.go:119] Starting legacy_token_tracking_controller
	I1014 08:47:32.363266   15224 command_runner.go:130] ! I1014 15:46:15.021471       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I1014 08:47:32.363266   15224 command_runner.go:130] ! I1014 15:46:15.021517       1 aggregator.go:169] waiting for initial CRD sync...
	I1014 08:47:32.363266   15224 command_runner.go:130] ! I1014 15:46:15.022050       1 dynamic_serving_content.go:135] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I1014 08:47:32.363266   15224 command_runner.go:130] ! I1014 15:46:15.022573       1 cluster_authentication_trust_controller.go:443] Starting cluster_authentication_trust_controller controller
	I1014 08:47:32.363266   15224 command_runner.go:130] ! I1014 15:46:15.022688       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
	I1014 08:47:32.363266   15224 command_runner.go:130] ! I1014 15:46:15.022775       1 system_namespaces_controller.go:66] Starting system namespaces controller
	I1014 08:47:32.363266   15224 command_runner.go:130] ! I1014 15:46:15.026778       1 apf_controller.go:377] Starting API Priority and Fairness config controller
	I1014 08:47:32.364266   15224 command_runner.go:130] ! I1014 15:46:15.027043       1 controller.go:78] Starting OpenAPI AggregationController
	I1014 08:47:32.364266   15224 command_runner.go:130] ! I1014 15:46:15.027942       1 customresource_discovery_controller.go:292] Starting DiscoveryController
	I1014 08:47:32.364266   15224 command_runner.go:130] ! I1014 15:46:15.029402       1 local_available_controller.go:156] Starting LocalAvailability controller
	I1014 08:47:32.364266   15224 command_runner.go:130] ! I1014 15:46:15.029447       1 cache.go:32] Waiting for caches to sync for LocalAvailability controller
	I1014 08:47:32.364266   15224 command_runner.go:130] ! I1014 15:46:15.029815       1 apiservice_controller.go:100] Starting APIServiceRegistrationController
	I1014 08:47:32.364266   15224 command_runner.go:130] ! I1014 15:46:15.029850       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I1014 08:47:32.364266   15224 command_runner.go:130] ! I1014 15:46:15.034040       1 crdregistration_controller.go:114] Starting crd-autoregister controller
	I1014 08:47:32.364266   15224 command_runner.go:130] ! I1014 15:46:15.034136       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I1014 08:47:32.364266   15224 command_runner.go:130] ! I1014 15:46:15.034690       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1014 08:47:32.364266   15224 command_runner.go:130] ! I1014 15:46:15.034946       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1014 08:47:32.364266   15224 command_runner.go:130] ! I1014 15:46:15.082229       1 controller.go:142] Starting OpenAPI controller
	I1014 08:47:32.364266   15224 command_runner.go:130] ! I1014 15:46:15.083838       1 controller.go:90] Starting OpenAPI V3 controller
	I1014 08:47:32.364266   15224 command_runner.go:130] ! I1014 15:46:15.083894       1 naming_controller.go:294] Starting NamingConditionController
	I1014 08:47:32.364266   15224 command_runner.go:130] ! I1014 15:46:15.086443       1 establishing_controller.go:81] Starting EstablishingController
	I1014 08:47:32.364266   15224 command_runner.go:130] ! I1014 15:46:15.087455       1 nonstructuralschema_controller.go:195] Starting NonStructuralSchemaConditionController
	I1014 08:47:32.364266   15224 command_runner.go:130] ! I1014 15:46:15.088333       1 apiapproval_controller.go:189] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I1014 08:47:32.364266   15224 command_runner.go:130] ! I1014 15:46:15.092677       1 crd_finalizer.go:269] Starting CRDFinalizer
	I1014 08:47:32.364266   15224 command_runner.go:130] ! I1014 15:46:15.212597       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1014 08:47:32.364266   15224 command_runner.go:130] ! I1014 15:46:15.212691       1 policy_source.go:224] refreshing policies
	I1014 08:47:32.364266   15224 command_runner.go:130] ! I1014 15:46:15.221529       1 shared_informer.go:320] Caches are synced for configmaps
	I1014 08:47:32.364266   15224 command_runner.go:130] ! I1014 15:46:15.226910       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1014 08:47:32.364266   15224 command_runner.go:130] ! I1014 15:46:15.227013       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1014 08:47:32.364266   15224 command_runner.go:130] ! I1014 15:46:15.229937       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1014 08:47:32.364266   15224 command_runner.go:130] ! I1014 15:46:15.231898       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1014 08:47:32.364266   15224 command_runner.go:130] ! I1014 15:46:15.233234       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1014 08:47:32.364266   15224 command_runner.go:130] ! I1014 15:46:15.234375       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1014 08:47:32.364266   15224 command_runner.go:130] ! I1014 15:46:15.235151       1 aggregator.go:171] initial CRD sync complete...
	I1014 08:47:32.364266   15224 command_runner.go:130] ! I1014 15:46:15.235400       1 autoregister_controller.go:144] Starting autoregister controller
	I1014 08:47:32.364266   15224 command_runner.go:130] ! I1014 15:46:15.235712       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1014 08:47:32.364266   15224 command_runner.go:130] ! I1014 15:46:15.235936       1 cache.go:39] Caches are synced for autoregister controller
	I1014 08:47:32.364266   15224 command_runner.go:130] ! I1014 15:46:15.255261       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1014 08:47:32.364266   15224 command_runner.go:130] ! I1014 15:46:15.256039       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I1014 08:47:32.364266   15224 command_runner.go:130] ! I1014 15:46:15.271561       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1014 08:47:32.364266   15224 command_runner.go:130] ! I1014 15:46:15.319091       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1014 08:47:32.364266   15224 command_runner.go:130] ! I1014 15:46:16.036564       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1014 08:47:32.364266   15224 command_runner.go:130] ! W1014 15:46:16.558489       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.20.100.167 172.20.106.123]
	I1014 08:47:32.364266   15224 command_runner.go:130] ! I1014 15:46:16.560272       1 controller.go:615] quota admission added evaluator for: endpoints
	I1014 08:47:32.364266   15224 command_runner.go:130] ! I1014 15:46:16.573015       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1014 08:47:32.364266   15224 command_runner.go:130] ! I1014 15:46:18.229365       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1014 08:47:32.364266   15224 command_runner.go:130] ! I1014 15:46:18.748102       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1014 08:47:32.364266   15224 command_runner.go:130] ! I1014 15:46:18.793266       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1014 08:47:32.364266   15224 command_runner.go:130] ! I1014 15:46:18.985788       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1014 08:47:32.364266   15224 command_runner.go:130] ! I1014 15:46:19.024530       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1014 08:47:32.364266   15224 command_runner.go:130] ! W1014 15:46:36.563040       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.20.106.123]
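
	The two lease.go warnings bracket the control-plane IP change: at 15:46:16 the endpoint reconciler still sees the stale master lease, so the "kubernetes" Service briefly lists both 172.20.100.167 and 172.20.106.123; by 15:46:36 the old lease has expired and only the new address 172.20.106.123 remains. That is expected after the node came back with a different IP, not an error. To double-check afterwards (a sketch; assumes kubectl is pointed at this cluster):

	    kubectl get endpoints kubernetes -n default
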
	I1014 08:47:32.371241   15224 logs.go:123] Gathering logs for coredns [d9831e9f8ce8] ...
	I1014 08:47:32.371241   15224 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d9831e9f8ce8"
	I1014 08:47:32.404873   15224 command_runner.go:130] > .:53
	I1014 08:47:32.404873   15224 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 6020d6b23d8ab86e45d1d2aab12a43bd19ffd1800c3e6fd0a66779be525e59cd5618fbe141b55ee3a43e920652bc72e7efafa696e55e63b2f614e5fc80dabe10
	I1014 08:47:32.404873   15224 command_runner.go:130] > CoreDNS-1.11.3
	I1014 08:47:32.404959   15224 command_runner.go:130] > linux/amd64, go1.21.11, a6338e9
	I1014 08:47:32.404959   15224 command_runner.go:130] > [INFO] 127.0.0.1:35483 - 39257 "HINFO IN 8382239991273371198.8905610076788717940. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.074337261s
	I1014 08:47:32.404959   15224 command_runner.go:130] > [INFO] 10.244.1.2:36950 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0003062s
	I1014 08:47:32.404959   15224 command_runner.go:130] > [INFO] 10.244.1.2:49277 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.118118924s
	I1014 08:47:32.404959   15224 command_runner.go:130] > [INFO] 10.244.1.2:33122 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.153089702s
	I1014 08:47:32.404959   15224 command_runner.go:130] > [INFO] 10.244.1.2:44549 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.188160849s
	I1014 08:47:32.404959   15224 command_runner.go:130] > [INFO] 10.244.0.3:43390 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000191s
	I1014 08:47:32.404959   15224 command_runner.go:130] > [INFO] 10.244.0.3:59817 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000279499s
	I1014 08:47:32.405061   15224 command_runner.go:130] > [INFO] 10.244.0.3:34294 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.0002004s
	I1014 08:47:32.405102   15224 command_runner.go:130] > [INFO] 10.244.0.3:56220 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.0002257s
	I1014 08:47:32.405102   15224 command_runner.go:130] > [INFO] 10.244.1.2:44291 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002098s
	I1014 08:47:32.405102   15224 command_runner.go:130] > [INFO] 10.244.1.2:42361 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.17965629s
	I1014 08:47:32.405102   15224 command_runner.go:130] > [INFO] 10.244.1.2:48756 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0002923s
	I1014 08:47:32.405207   15224 command_runner.go:130] > [INFO] 10.244.1.2:53437 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000274799s
	I1014 08:47:32.405232   15224 command_runner.go:130] > [INFO] 10.244.1.2:60026 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.013560692s
	I1014 08:47:32.405232   15224 command_runner.go:130] > [INFO] 10.244.1.2:39241 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001752s
	I1014 08:47:32.405232   15224 command_runner.go:130] > [INFO] 10.244.1.2:36696 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0003084s
	I1014 08:47:32.405232   15224 command_runner.go:130] > [INFO] 10.244.1.2:51603 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001109s
	I1014 08:47:32.405319   15224 command_runner.go:130] > [INFO] 10.244.0.3:37516 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002057s
	I1014 08:47:32.405319   15224 command_runner.go:130] > [INFO] 10.244.0.3:41090 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0001329s
	I1014 08:47:32.405319   15224 command_runner.go:130] > [INFO] 10.244.0.3:45330 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001397s
	I1014 08:47:32.405397   15224 command_runner.go:130] > [INFO] 10.244.0.3:46520 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000154699s
	I1014 08:47:32.405419   15224 command_runner.go:130] > [INFO] 10.244.0.3:40342 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0001347s
	I1014 08:47:32.405419   15224 command_runner.go:130] > [INFO] 10.244.0.3:60385 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001518s
	I1014 08:47:32.405419   15224 command_runner.go:130] > [INFO] 10.244.0.3:56338 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001527s
	I1014 08:47:32.405419   15224 command_runner.go:130] > [INFO] 10.244.0.3:50480 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0002505s
	I1014 08:47:32.405500   15224 command_runner.go:130] > [INFO] 10.244.1.2:60726 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002297s
	I1014 08:47:32.405522   15224 command_runner.go:130] > [INFO] 10.244.1.2:44347 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0002249s
	I1014 08:47:32.405522   15224 command_runner.go:130] > [INFO] 10.244.1.2:49092 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001075s
	I1014 08:47:32.405586   15224 command_runner.go:130] > [INFO] 10.244.1.2:54937 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000068s
	I1014 08:47:32.405615   15224 command_runner.go:130] > [INFO] 10.244.0.3:59749 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002552s
	I1014 08:47:32.405615   15224 command_runner.go:130] > [INFO] 10.244.0.3:56008 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0002499s
	I1014 08:47:32.405615   15224 command_runner.go:130] > [INFO] 10.244.0.3:60338 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001065s
	I1014 08:47:32.405615   15224 command_runner.go:130] > [INFO] 10.244.0.3:49712 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001634s
	I1014 08:47:32.405735   15224 command_runner.go:130] > [INFO] 10.244.1.2:56508 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001577s
	I1014 08:47:32.405795   15224 command_runner.go:130] > [INFO] 10.244.1.2:45387 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001545s
	I1014 08:47:32.405795   15224 command_runner.go:130] > [INFO] 10.244.1.2:39608 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0001419s
	I1014 08:47:32.405795   15224 command_runner.go:130] > [INFO] 10.244.1.2:53878 - 5 "PTR IN 1.96.20.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.0001923s
	I1014 08:47:32.405795   15224 command_runner.go:130] > [INFO] 10.244.0.3:58589 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002828s
	I1014 08:47:32.405892   15224 command_runner.go:130] > [INFO] 10.244.0.3:58608 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001609s
	I1014 08:47:32.405892   15224 command_runner.go:130] > [INFO] 10.244.0.3:52599 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000633s
	I1014 08:47:32.405892   15224 command_runner.go:130] > [INFO] 10.244.0.3:58233 - 5 "PTR IN 1.96.20.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.0001058s
	I1014 08:47:32.405956   15224 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I1014 08:47:32.405956   15224 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
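
	This coredns excerpt is from the Exited attempt-0 container d9831e9f8ce8: queries from 10.244.0.3 and 10.244.1.2 all resolve (NOERROR, with NXDOMAIN only for names that genuinely do not exist), and the closing SIGTERM and lameduck lines are an orderly shutdown during the restart rather than a crash. In-cluster DNS can be re-verified from the busybox pod already present in this cluster (a sketch):

	    kubectl exec busybox-7dff88458-vlp7j -- nslookup kubernetes.default
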
	I1014 08:47:32.409048   15224 logs.go:123] Gathering logs for kube-scheduler [d428685276e1] ...
	I1014 08:47:32.409048   15224 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d428685276e1"
	I1014 08:47:32.436500   15224 command_runner.go:130] ! I1014 15:46:12.515594       1 serving.go:386] Generated self-signed cert in-memory
	I1014 08:47:32.437069   15224 command_runner.go:130] ! W1014 15:46:15.152686       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I1014 08:47:32.437069   15224 command_runner.go:130] ! W1014 15:46:15.152818       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I1014 08:47:32.437069   15224 command_runner.go:130] ! W1014 15:46:15.152851       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	I1014 08:47:32.437178   15224 command_runner.go:130] ! W1014 15:46:15.153007       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1014 08:47:32.437178   15224 command_runner.go:130] ! I1014 15:46:15.250163       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I1014 08:47:32.437250   15224 command_runner.go:130] ! I1014 15:46:15.250420       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 08:47:32.437250   15224 command_runner.go:130] ! I1014 15:46:15.258344       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1014 08:47:32.437319   15224 command_runner.go:130] ! I1014 15:46:15.258735       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1014 08:47:32.437347   15224 command_runner.go:130] ! I1014 15:46:15.263966       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1014 08:47:32.437380   15224 command_runner.go:130] ! I1014 15:46:15.258753       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1014 08:47:32.437405   15224 command_runner.go:130] ! I1014 15:46:15.365145       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
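
	The four kube-scheduler warnings at 15:46:15 are a startup race: system:kube-scheduler cannot yet read the extension-apiserver-authentication configmap, so the scheduler continues without that authentication configuration. The message itself carries the remedy, to apply only if the warning persisted (copied from the log above, placeholders left as-is):

	    kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA

	Here no action was needed: the following lines show the scheduler serving on 127.0.0.1:10259 with its client-ca informer synced within the same second.
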
	I1014 08:47:32.439893   15224 logs.go:123] Gathering logs for kube-proxy [e83db276dec3] ...
	I1014 08:47:32.439974   15224 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e83db276dec3"
	I1014 08:47:32.471190   15224 command_runner.go:130] ! I1014 15:46:17.821967       1 server_linux.go:66] "Using iptables proxy"
	I1014 08:47:32.471190   15224 command_runner.go:130] ! E1014 15:46:17.985243       1 proxier.go:734] "Error cleaning up nftables rules" err=<
	I1014 08:47:32.471291   15224 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-24: Error: Could not process rule: Operation not supported
	I1014 08:47:32.471291   15224 command_runner.go:130] ! 	add table ip kube-proxy
	I1014 08:47:32.471291   15224 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^
	I1014 08:47:32.471291   15224 command_runner.go:130] !  >
	I1014 08:47:32.471291   15224 command_runner.go:130] ! E1014 15:46:18.020523       1 proxier.go:734] "Error cleaning up nftables rules" err=<
	I1014 08:47:32.471291   15224 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
	I1014 08:47:32.471354   15224 command_runner.go:130] ! 	add table ip6 kube-proxy
	I1014 08:47:32.471354   15224 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^^
	I1014 08:47:32.471354   15224 command_runner.go:130] !  >
	I1014 08:47:32.471354   15224 command_runner.go:130] ! I1014 15:46:18.173230       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["172.20.106.123"]
	I1014 08:47:32.471450   15224 command_runner.go:130] ! E1014 15:46:18.173392       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1014 08:47:32.471514   15224 command_runner.go:130] ! I1014 15:46:18.286207       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1014 08:47:32.471534   15224 command_runner.go:130] ! I1014 15:46:18.287289       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1014 08:47:32.471534   15224 command_runner.go:130] ! I1014 15:46:18.287905       1 server_linux.go:169] "Using iptables Proxier"
	I1014 08:47:32.471597   15224 command_runner.go:130] ! I1014 15:46:18.293792       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1014 08:47:32.471622   15224 command_runner.go:130] ! I1014 15:46:18.300740       1 server.go:483] "Version info" version="v1.31.1"
	I1014 08:47:32.471652   15224 command_runner.go:130] ! I1014 15:46:18.300778       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 08:47:32.471652   15224 command_runner.go:130] ! I1014 15:46:18.305824       1 config.go:199] "Starting service config controller"
	I1014 08:47:32.471652   15224 command_runner.go:130] ! I1014 15:46:18.308209       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1014 08:47:32.471652   15224 command_runner.go:130] ! I1014 15:46:18.308868       1 config.go:328] "Starting node config controller"
	I1014 08:47:32.471652   15224 command_runner.go:130] ! I1014 15:46:18.314183       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1014 08:47:32.471652   15224 command_runner.go:130] ! I1014 15:46:18.309398       1 config.go:105] "Starting endpoint slice config controller"
	I1014 08:47:32.471652   15224 command_runner.go:130] ! I1014 15:46:18.317842       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1014 08:47:32.471652   15224 command_runner.go:130] ! I1014 15:46:18.419882       1 shared_informer.go:320] Caches are synced for node config
	I1014 08:47:32.471652   15224 command_runner.go:130] ! I1014 15:46:18.419918       1 shared_informer.go:320] Caches are synced for service config
	I1014 08:47:32.471652   15224 command_runner.go:130] ! I1014 15:46:18.435586       1 shared_informer.go:320] Caches are synced for endpoint slice config
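
	The two "Error cleaning up nftables rules ... Operation not supported" entries are kube-proxy probing for nftables on a kernel without that support; it then logs "Using iptables Proxier" and all three config informers sync, so these entries are noise rather than a failure. The resulting rule set can be inspected on the node (a sketch; profile name taken from these logs, and assumes iptables-save is present in the guest as it normally is in minikube VMs):

	    minikube ssh -p multinode-671000 "sudo iptables-save | grep -c KUBE-"
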
	I1014 08:47:32.474216   15224 logs.go:123] Gathering logs for kindnet [fcdf89a3ac8c] ...
	I1014 08:47:32.474216   15224 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcdf89a3ac8c"
	I1014 08:47:32.503247   15224 command_runner.go:130] ! I1014 15:32:44.862261       1 main.go:300] handling current node
	I1014 08:47:32.503247   15224 command_runner.go:130] ! I1014 15:32:44.862301       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.503942   15224 command_runner.go:130] ! I1014 15:32:44.862313       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.503980   15224 command_runner.go:130] ! I1014 15:32:44.862605       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.504770   15224 command_runner.go:130] ! I1014 15:32:44.862636       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.504770   15224 command_runner.go:130] ! I1014 15:32:54.862103       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.504861   15224 command_runner.go:130] ! I1014 15:32:54.862232       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.509199   15224 command_runner.go:130] ! I1014 15:32:54.862979       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.509342   15224 command_runner.go:130] ! I1014 15:32:54.863013       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.509707   15224 command_runner.go:130] ! I1014 15:32:54.863219       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.510555   15224 command_runner.go:130] ! I1014 15:32:54.863233       1 main.go:300] handling current node
	I1014 08:47:32.510555   15224 command_runner.go:130] ! I1014 15:33:04.864377       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.510555   15224 command_runner.go:130] ! I1014 15:33:04.864510       1 main.go:300] handling current node
	I1014 08:47:32.510555   15224 command_runner.go:130] ! I1014 15:33:04.864534       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.510555   15224 command_runner.go:130] ! I1014 15:33:04.864544       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.510555   15224 command_runner.go:130] ! I1014 15:33:04.864795       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.510555   15224 command_runner.go:130] ! I1014 15:33:04.864807       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.510555   15224 command_runner.go:130] ! I1014 15:33:14.870098       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.510555   15224 command_runner.go:130] ! I1014 15:33:14.870279       1 main.go:300] handling current node
	I1014 08:47:32.510555   15224 command_runner.go:130] ! I1014 15:33:14.870319       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.510555   15224 command_runner.go:130] ! I1014 15:33:14.870394       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.510555   15224 command_runner.go:130] ! I1014 15:33:14.872221       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.510555   15224 command_runner.go:130] ! I1014 15:33:14.872265       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.510555   15224 command_runner.go:130] ! I1014 15:33:24.862168       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.510555   15224 command_runner.go:130] ! I1014 15:33:24.862234       1 main.go:300] handling current node
	I1014 08:47:32.510555   15224 command_runner.go:130] ! I1014 15:33:24.862290       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.510555   15224 command_runner.go:130] ! I1014 15:33:24.862303       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.510555   15224 command_runner.go:130] ! I1014 15:33:24.862799       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.510555   15224 command_runner.go:130] ! I1014 15:33:24.862950       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.511132   15224 command_runner.go:130] ! I1014 15:33:34.870712       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.511132   15224 command_runner.go:130] ! I1014 15:33:34.870952       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.511196   15224 command_runner.go:130] ! I1014 15:33:34.871749       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.511196   15224 command_runner.go:130] ! I1014 15:33:34.871848       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.511196   15224 command_runner.go:130] ! I1014 15:33:34.872312       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.511245   15224 command_runner.go:130] ! I1014 15:33:34.872409       1 main.go:300] handling current node
	I1014 08:47:32.511245   15224 command_runner.go:130] ! I1014 15:33:44.868271       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.511245   15224 command_runner.go:130] ! I1014 15:33:44.868442       1 main.go:300] handling current node
	I1014 08:47:32.511245   15224 command_runner.go:130] ! I1014 15:33:44.868482       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.511245   15224 command_runner.go:130] ! I1014 15:33:44.868509       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.511245   15224 command_runner.go:130] ! I1014 15:33:44.869165       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.511245   15224 command_runner.go:130] ! I1014 15:33:44.869252       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.511245   15224 command_runner.go:130] ! I1014 15:33:54.862162       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.511793   15224 command_runner.go:130] ! I1014 15:33:54.862365       1 main.go:300] handling current node
	I1014 08:47:32.511793   15224 command_runner.go:130] ! I1014 15:33:54.862404       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.511985   15224 command_runner.go:130] ! I1014 15:33:54.862429       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.512015   15224 command_runner.go:130] ! I1014 15:33:54.862766       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:33:54.862800       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:34:04.870860       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:34:04.870993       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:34:04.871751       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:34:04.871830       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:34:04.872365       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:34:04.872444       1 main.go:300] handling current node
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:34:14.868274       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:34:14.868410       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:34:14.869151       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:34:14.869244       1 main.go:300] handling current node
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:34:14.869263       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:34:14.869271       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:34:24.869326       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:34:24.869383       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:34:24.870365       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:34:24.870464       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:34:24.871197       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:34:24.871235       1 main.go:300] handling current node
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:34:34.862280       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:34:34.862387       1 main.go:300] handling current node
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:34:34.862420       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:34:34.862440       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:34:34.862809       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:34:34.862844       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:34:44.870611       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:34:44.870703       1 main.go:300] handling current node
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:34:44.870732       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:34:44.870826       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:34:44.871348       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:34:44.871437       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:34:54.862260       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:34:54.862358       1 main.go:300] handling current node
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:34:54.862379       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:34:54.862388       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:34:54.862782       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:34:54.862862       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:35:04.871418       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:35:04.871489       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:35:04.872322       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:35:04.872416       1 main.go:300] handling current node
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:35:04.872437       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:35:04.872445       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:35:14.870301       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:35:14.870413       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:35:14.870922       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:35:14.870941       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.512051   15224 command_runner.go:130] ! I1014 15:35:14.871055       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.512847   15224 command_runner.go:130] ! I1014 15:35:14.871086       1 main.go:300] handling current node
	I1014 08:47:32.512847   15224 command_runner.go:130] ! I1014 15:35:24.870776       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.512847   15224 command_runner.go:130] ! I1014 15:35:24.870814       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.512847   15224 command_runner.go:130] ! I1014 15:35:24.871449       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.512847   15224 command_runner.go:130] ! I1014 15:35:24.871682       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.512847   15224 command_runner.go:130] ! I1014 15:35:24.872057       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.512847   15224 command_runner.go:130] ! I1014 15:35:24.872149       1 main.go:300] handling current node
	I1014 08:47:32.512847   15224 command_runner.go:130] ! I1014 15:35:34.871155       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.512847   15224 command_runner.go:130] ! I1014 15:35:34.871422       1 main.go:300] handling current node
	I1014 08:47:32.512847   15224 command_runner.go:130] ! I1014 15:35:34.876612       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.512847   15224 command_runner.go:130] ! I1014 15:35:34.876630       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.512847   15224 command_runner.go:130] ! I1014 15:35:34.876808       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.512847   15224 command_runner.go:130] ! I1014 15:35:34.876817       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.512847   15224 command_runner.go:130] ! I1014 15:35:44.872405       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.512847   15224 command_runner.go:130] ! I1014 15:35:44.872450       1 main.go:300] handling current node
	I1014 08:47:32.512847   15224 command_runner.go:130] ! I1014 15:35:44.872467       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.512847   15224 command_runner.go:130] ! I1014 15:35:44.872473       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.512847   15224 command_runner.go:130] ! I1014 15:35:44.873120       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.512847   15224 command_runner.go:130] ! I1014 15:35:44.873155       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.512847   15224 command_runner.go:130] ! I1014 15:35:54.862113       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.512847   15224 command_runner.go:130] ! I1014 15:35:54.862220       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.512847   15224 command_runner.go:130] ! I1014 15:35:54.862608       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.512847   15224 command_runner.go:130] ! I1014 15:35:54.862725       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.512847   15224 command_runner.go:130] ! I1014 15:35:54.862993       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.512847   15224 command_runner.go:130] ! I1014 15:35:54.863089       1 main.go:300] handling current node
	I1014 08:47:32.512847   15224 command_runner.go:130] ! I1014 15:36:04.870594       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.512847   15224 command_runner.go:130] ! I1014 15:36:04.870634       1 main.go:300] handling current node
	I1014 08:47:32.512847   15224 command_runner.go:130] ! I1014 15:36:04.870705       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.512847   15224 command_runner.go:130] ! I1014 15:36:04.870719       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.512847   15224 command_runner.go:130] ! I1014 15:36:04.871246       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.512847   15224 command_runner.go:130] ! I1014 15:36:04.871261       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.512847   15224 command_runner.go:130] ! I1014 15:36:14.862194       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.512847   15224 command_runner.go:130] ! I1014 15:36:14.862337       1 main.go:300] handling current node
	I1014 08:47:32.512847   15224 command_runner.go:130] ! I1014 15:36:14.862361       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.512847   15224 command_runner.go:130] ! I1014 15:36:14.862370       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.512847   15224 command_runner.go:130] ! I1014 15:36:14.863024       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.512847   15224 command_runner.go:130] ! I1014 15:36:14.863053       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.512847   15224 command_runner.go:130] ! I1014 15:36:24.870839       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.512847   15224 command_runner.go:130] ! I1014 15:36:24.871114       1 main.go:300] handling current node
	I1014 08:47:32.512847   15224 command_runner.go:130] ! I1014 15:36:24.871303       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.512847   15224 command_runner.go:130] ! I1014 15:36:24.871618       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.512847   15224 command_runner.go:130] ! I1014 15:36:24.872052       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:36:24.872164       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:36:34.870320       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:36:34.870375       1 main.go:300] handling current node
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:36:34.870396       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:36:34.870404       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:36:34.870774       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:36:34.870810       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:36:44.864305       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:36:44.864530       1 main.go:300] handling current node
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:36:44.864616       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:36:44.864683       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:36:44.865206       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:36:44.865241       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:36:54.862701       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:36:54.862834       1 main.go:300] handling current node
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:36:54.862940       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:36:54.863054       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:36:54.864321       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:36:54.864397       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:37:04.863761       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:37:04.863854       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:37:04.864505       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:37:04.864638       1 main.go:300] handling current node
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:37:04.864656       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:37:04.864664       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:37:14.866293       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:37:14.866653       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:37:14.867034       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:37:14.867067       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:37:14.867179       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:37:14.867247       1 main.go:300] handling current node
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:37:24.867969       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:37:24.868019       1 main.go:300] handling current node
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:37:24.868036       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:37:24.868043       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:37:24.868511       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:37:24.868549       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:37:34.863786       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:37:34.864224       1 main.go:300] handling current node
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:37:34.864384       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:37:34.864448       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:37:34.864771       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:37:34.864865       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:37:44.871310       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:37:44.871352       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:37:44.871803       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:37:44.871837       1 main.go:300] handling current node
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:37:44.871852       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:37:44.871859       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:37:54.862573       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:37:54.862694       1 main.go:300] handling current node
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:37:54.862714       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:37:54.862723       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:37:54.863288       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:37:54.863364       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:38:04.872124       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:38:04.872285       1 main.go:300] handling current node
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:38:04.872330       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:38:04.872343       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:38:04.873184       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:38:04.873352       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:38:14.863654       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:38:14.863788       1 main.go:300] handling current node
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:38:14.863812       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.513835   15224 command_runner.go:130] ! I1014 15:38:14.863822       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:38:14.864488       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:38:14.864585       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:38:24.868537       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:38:24.868643       1 main.go:300] handling current node
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:38:24.868664       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:38:24.868672       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:38:24.869258       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:38:24.869347       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:38:34.864233       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:38:34.864469       1 main.go:300] handling current node
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:38:34.864497       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:38:34.864509       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:38:34.865023       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:38:34.865061       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:38:44.870754       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:38:44.870859       1 main.go:300] handling current node
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:38:44.870919       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:38:44.870931       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:38:44.871124       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:38:44.871155       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:38:54.862849       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:38:54.863008       1 main.go:300] handling current node
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:38:54.863029       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:38:54.863040       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:38:54.863313       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:38:54.863343       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:39:04.861865       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:39:04.862353       1 main.go:300] handling current node
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:39:04.862819       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:39:04.863053       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:39:04.863648       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:39:04.865127       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:39:14.870473       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:39:14.870526       1 main.go:300] handling current node
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:39:14.870544       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:39:14.870551       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:39:14.871123       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:39:14.871161       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:39:24.862264       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:39:24.862304       1 main.go:300] handling current node
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:39:24.862323       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:39:24.862331       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:39:24.863326       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:39:24.863417       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:39:34.862868       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:39:34.863041       1 main.go:300] handling current node
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:39:34.863063       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:39:34.863072       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:39:34.863370       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:39:34.863460       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:39:44.872051       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:39:44.872175       1 main.go:300] handling current node
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:39:44.872198       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:39:44.872392       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:39:44.873038       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:39:44.873160       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:39:54.862953       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:39:54.862990       1 main.go:300] handling current node
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:39:54.863013       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:39:54.863022       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:39:54.863377       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:39:54.863412       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:40:04.864160       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:40:04.864198       1 main.go:300] handling current node
	I1014 08:47:32.514835   15224 command_runner.go:130] ! I1014 15:40:04.864216       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:40:04.864222       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:40:04.864390       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:40:04.864399       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:40:14.862864       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:40:14.863081       1 main.go:300] handling current node
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:40:14.863442       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:40:14.863496       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:40:14.864019       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:40:14.864052       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:40:24.867383       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:40:24.867717       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:40:24.868487       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:40:24.868619       1 main.go:300] handling current node
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:40:24.868640       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:40:24.868650       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:40:34.866060       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:40:34.866194       1 main.go:300] handling current node
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:40:34.866224       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:40:34.866240       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:40:34.867632       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:40:34.867868       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:40:44.875002       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:40:44.875336       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:40:44.875792       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:40:44.875991       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:40:44.876302       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:40:44.876531       1 main.go:300] handling current node
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:40:54.862504       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:40:54.862640       1 main.go:300] handling current node
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:40:54.862766       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:40:54.862834       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:40:54.863108       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:40:54.863140       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:41:04.863181       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:41:04.863304       1 main.go:300] handling current node
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:41:04.863326       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:41:04.863335       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:41:04.863824       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:41:04.863963       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:41:14.868270       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:41:14.868443       1 main.go:300] handling current node
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:41:14.868487       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:41:14.868541       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:41:14.868808       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:41:14.868843       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:41:24.862261       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:41:24.862508       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:41:24.863242       1 main.go:296] Handling node with IPs: map[172.20.102.16:{}]
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:41:24.863792       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.2.0/24] 
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:41:24.864172       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.515840   15224 command_runner.go:130] ! I1014 15:41:24.864327       1 main.go:300] handling current node
	I1014 08:47:32.516842   15224 command_runner.go:130] ! I1014 15:41:34.862294       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.516842   15224 command_runner.go:130] ! I1014 15:41:34.862355       1 main.go:300] handling current node
	I1014 08:47:32.516842   15224 command_runner.go:130] ! I1014 15:41:34.862377       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.516842   15224 command_runner.go:130] ! I1014 15:41:34.862385       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.516842   15224 command_runner.go:130] ! I1014 15:41:44.862674       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:32.516842   15224 command_runner.go:130] ! I1014 15:41:44.862799       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:32.516842   15224 command_runner.go:130] ! I1014 15:41:44.863254       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.20.102.29 Flags: [] Table: 0 Realm: 0} 
	I1014 08:47:32.516842   15224 command_runner.go:130] ! I1014 15:41:44.863509       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.516842   15224 command_runner.go:130] ! I1014 15:41:44.863768       1 main.go:300] handling current node
	I1014 08:47:32.516842   15224 command_runner.go:130] ! I1014 15:41:44.863945       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.516842   15224 command_runner.go:130] ! I1014 15:41:44.864052       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.516842   15224 command_runner.go:130] ! I1014 15:41:54.862083       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.516842   15224 command_runner.go:130] ! I1014 15:41:54.862208       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.516842   15224 command_runner.go:130] ! I1014 15:41:54.862577       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:32.516842   15224 command_runner.go:130] ! I1014 15:41:54.862723       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:32.516842   15224 command_runner.go:130] ! I1014 15:41:54.863005       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.516842   15224 command_runner.go:130] ! I1014 15:41:54.863097       1 main.go:300] handling current node
	I1014 08:47:32.516842   15224 command_runner.go:130] ! I1014 15:42:04.870504       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.516842   15224 command_runner.go:130] ! I1014 15:42:04.871039       1 main.go:300] handling current node
	I1014 08:47:32.516842   15224 command_runner.go:130] ! I1014 15:42:04.871167       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.516842   15224 command_runner.go:130] ! I1014 15:42:04.871277       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.516842   15224 command_runner.go:130] ! I1014 15:42:04.871721       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:32.516842   15224 command_runner.go:130] ! I1014 15:42:04.871740       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:32.516842   15224 command_runner.go:130] ! I1014 15:42:14.862252       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.516842   15224 command_runner.go:130] ! I1014 15:42:14.862455       1 main.go:300] handling current node
	I1014 08:47:32.516842   15224 command_runner.go:130] ! I1014 15:42:14.862499       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.516842   15224 command_runner.go:130] ! I1014 15:42:14.862521       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.516842   15224 command_runner.go:130] ! I1014 15:42:14.863189       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:32.516842   15224 command_runner.go:130] ! I1014 15:42:14.863224       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:32.516842   15224 command_runner.go:130] ! I1014 15:42:24.862819       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.516842   15224 command_runner.go:130] ! I1014 15:42:24.863072       1 main.go:300] handling current node
	I1014 08:47:32.516842   15224 command_runner.go:130] ! I1014 15:42:24.863093       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.516842   15224 command_runner.go:130] ! I1014 15:42:24.863103       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.516842   15224 command_runner.go:130] ! I1014 15:42:24.864093       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:32.516842   15224 command_runner.go:130] ! I1014 15:42:24.864136       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:32.516842   15224 command_runner.go:130] ! I1014 15:42:34.863373       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:32.516842   15224 command_runner.go:130] ! I1014 15:42:34.863425       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:32.516842   15224 command_runner.go:130] ! I1014 15:42:34.863670       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.516842   15224 command_runner.go:130] ! I1014 15:42:34.863742       1 main.go:300] handling current node
	I1014 08:47:32.516842   15224 command_runner.go:130] ! I1014 15:42:34.863763       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.516842   15224 command_runner.go:130] ! I1014 15:42:34.863771       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.516842   15224 command_runner.go:130] ! I1014 15:42:44.861842       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.517899   15224 command_runner.go:130] ! I1014 15:42:44.862176       1 main.go:300] handling current node
	I1014 08:47:32.517899   15224 command_runner.go:130] ! I1014 15:42:44.862271       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.517899   15224 command_runner.go:130] ! I1014 15:42:44.862357       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.517899   15224 command_runner.go:130] ! I1014 15:42:44.862743       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:32.517899   15224 command_runner.go:130] ! I1014 15:42:44.863009       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:32.517899   15224 command_runner.go:130] ! I1014 15:42:54.863140       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:32.517899   15224 command_runner.go:130] ! I1014 15:42:54.863181       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:32.517899   15224 command_runner.go:130] ! I1014 15:42:54.863865       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.517899   15224 command_runner.go:130] ! I1014 15:42:54.864051       1 main.go:300] handling current node
	I1014 08:47:32.517899   15224 command_runner.go:130] ! I1014 15:42:54.864417       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.517899   15224 command_runner.go:130] ! I1014 15:42:54.864427       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.517899   15224 command_runner.go:130] ! I1014 15:43:04.862539       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.517899   15224 command_runner.go:130] ! I1014 15:43:04.862625       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.517899   15224 command_runner.go:130] ! I1014 15:43:04.863289       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:32.517899   15224 command_runner.go:130] ! I1014 15:43:04.863395       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:32.517899   15224 command_runner.go:130] ! I1014 15:43:04.863612       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.517899   15224 command_runner.go:130] ! I1014 15:43:04.863764       1 main.go:300] handling current node
	I1014 08:47:32.517899   15224 command_runner.go:130] ! I1014 15:43:14.871242       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.517899   15224 command_runner.go:130] ! I1014 15:43:14.871727       1 main.go:300] handling current node
	I1014 08:47:32.517899   15224 command_runner.go:130] ! I1014 15:43:14.871818       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.517899   15224 command_runner.go:130] ! I1014 15:43:14.871846       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.517899   15224 command_runner.go:130] ! I1014 15:43:14.872085       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:32.517899   15224 command_runner.go:130] ! I1014 15:43:14.872201       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:32.517899   15224 command_runner.go:130] ! I1014 15:43:24.871405       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.517899   15224 command_runner.go:130] ! I1014 15:43:24.871540       1 main.go:300] handling current node
	I1014 08:47:32.517899   15224 command_runner.go:130] ! I1014 15:43:24.871566       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.517899   15224 command_runner.go:130] ! I1014 15:43:24.871575       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.517899   15224 command_runner.go:130] ! I1014 15:43:24.871835       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:32.517899   15224 command_runner.go:130] ! I1014 15:43:24.872193       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:32.517899   15224 command_runner.go:130] ! I1014 15:43:34.863042       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:32.517899   15224 command_runner.go:130] ! I1014 15:43:34.863237       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 08:47:32.517899   15224 command_runner.go:130] ! I1014 15:43:34.863962       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.517899   15224 command_runner.go:130] ! I1014 15:43:34.864059       1 main.go:300] handling current node
	I1014 08:47:32.517899   15224 command_runner.go:130] ! I1014 15:43:34.864077       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.517899   15224 command_runner.go:130] ! I1014 15:43:34.864085       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.517899   15224 command_runner.go:130] ! I1014 15:43:44.871016       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 08:47:32.517899   15224 command_runner.go:130] ! I1014 15:43:44.871057       1 main.go:300] handling current node
	I1014 08:47:32.517899   15224 command_runner.go:130] ! I1014 15:43:44.871074       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 08:47:32.518872   15224 command_runner.go:130] ! I1014 15:43:44.871081       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 08:47:32.518872   15224 command_runner.go:130] ! I1014 15:43:44.871299       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 08:47:32.518872   15224 command_runner.go:130] ! I1014 15:43:44.871310       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
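The kindnet block above is its roughly ten-second reconcile loop (note the timestamps stepping 15:32:44, 15:32:54, ...): each pass it handles the current node, then visits every peer node's pod CIDR. The single routes.go "Adding route" line at 15:41:44 fires when multinode-671000-m03 came back with a new node IP (172.20.102.29) and pod CIDR (10.244.3.0/24). A rough sketch of that route programming with the vishvananda/netlink package; this is a simplification, not kindnet's actual source:

    // Sketch: ensure a route to a remote node's pod CIDR via its node IP,
    // similar in spirit to the kindnet routes.go line above. Linux-only.
    package main

    import (
        "fmt"
        "net"

        "github.com/vishvananda/netlink"
    )

    func ensurePodCIDRRoute(podCIDR, nodeIP string) error {
        _, dst, err := net.ParseCIDR(podCIDR)
        if err != nil {
            return err
        }
        route := netlink.Route{
            Dst: dst,                 // e.g. 10.244.3.0/24
            Gw:  net.ParseIP(nodeIP), // e.g. 172.20.102.29
        }
        // RouteReplace is idempotent, so a reconcile loop can call it on
        // every pass without erroring when the route already exists.
        if err := netlink.RouteReplace(&route); err != nil {
            return fmt.Errorf("adding route %v: %w", route, err)
        }
        return nil
    }

    func main() {
        if err := ensurePodCIDRRoute("10.244.3.0/24", "172.20.102.29"); err != nil {
            fmt.Println(err)
        }
    }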
	I1014 08:47:35.038442   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods
	I1014 08:47:35.038539   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:35.038539   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:35.038539   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:35.043918   15224 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 08:47:35.043918   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:35.043918   15224 round_trippers.go:580]     Audit-Id: 4ce14b73-b264-4a50-b726-0118663ea6b7
	I1014 08:47:35.043918   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:35.043918   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:35.044631   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:35.044631   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:35.044631   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:35 GMT
	I1014 08:47:35.050311   15224 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"2017"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"2001","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 90019 chars]
	I1014 08:47:35.054886   15224 system_pods.go:59] 12 kube-system pods found
	I1014 08:47:35.054886   15224 system_pods.go:61] "coredns-7c65d6cfc9-fs9ct" [fd736862-9e3e-4a3d-9a86-08efd2338477] Running
	I1014 08:47:35.054886   15224 system_pods.go:61] "etcd-multinode-671000" [098aece2-cb2c-470a-878a-872417e4387f] Running
	I1014 08:47:35.054886   15224 system_pods.go:61] "kindnet-5rqxq" [480b1f88-eb32-4638-9834-2be17b8d35ed] Running
	I1014 08:47:35.054886   15224 system_pods.go:61] "kindnet-rgbjf" [445ff184-85e8-4153-a3d0-a0185c4f95de] Running
	I1014 08:47:35.054886   15224 system_pods.go:61] "kindnet-wqrx6" [a508bbf9-7565-4c73-98cb-e9684985c298] Running
	I1014 08:47:35.054886   15224 system_pods.go:61] "kube-apiserver-multinode-671000" [64595feb-e6e8-4e69-a4b7-6459d15e3beb] Running
	I1014 08:47:35.054886   15224 system_pods.go:61] "kube-controller-manager-multinode-671000" [a5c7bb80-c844-476f-ba47-1cd4e599b92d] Running
	I1014 08:47:35.054886   15224 system_pods.go:61] "kube-proxy-kbpjf" [004b7f38-fa3b-4c2c-9524-8d5b1ba514e9] Running
	I1014 08:47:35.054886   15224 system_pods.go:61] "kube-proxy-n6txs" [796a44f9-2067-438d-9359-34d5f968c861] Running
	I1014 08:47:35.054886   15224 system_pods.go:61] "kube-proxy-r74dx" [f8d14473-8859-4015-84e9-d00656cc00c9] Running
	I1014 08:47:35.054886   15224 system_pods.go:61] "kube-scheduler-multinode-671000" [97febcab-f54d-4338-ba7c-2dc5e69b77fc] Running
	I1014 08:47:35.054886   15224 system_pods.go:61] "storage-provisioner" [fde8ff75-bc7f-4db4-b098-c3a08b38d205] Running
	I1014 08:47:35.054886   15224 system_pods.go:74] duration metric: took 3.7148124s to wait for pod list to return data ...
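[editor's note] The round_trippers entries show the wait is nothing more than repeated authenticated GETs against /api/v1/namespaces/kube-system/pods until every pod reports phase Running. A sketch of such a poll using only net/http and a minimal PodList decoding; the http.Client is assumed to already trust the cluster CA and carry client credentials (minikube builds this from the kubeconfig), and the types cover only the fields the check reads:

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// Minimal slice of the PodList schema; only what the readiness check needs.
type podList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Phase string `json:"phase"`
		} `json:"status"`
	} `json:"items"`
}

// allRunning GETs the kube-system pod list and reports whether every pod is Running.
func allRunning(c *http.Client, apiServer string) (bool, error) {
	req, err := http.NewRequest("GET", apiServer+"/api/v1/namespaces/kube-system/pods", nil)
	if err != nil {
		return false, err
	}
	req.Header.Set("Accept", "application/json")
	resp, err := c.Do(req)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	var pl podList
	if err := json.NewDecoder(resp.Body).Decode(&pl); err != nil {
		return false, err
	}
	for _, p := range pl.Items {
		if p.Status.Phase != "Running" {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	// Assumes TLS client credentials are already configured on the transport.
	c := &http.Client{Timeout: 10 * time.Second}
	for {
		if ok, err := allRunning(c, "https://172.20.106.123:8443"); err == nil && ok {
			fmt.Println("all kube-system pods Running")
			return
		}
		time.Sleep(2 * time.Second)
	}
}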
	I1014 08:47:35.054886   15224 default_sa.go:34] waiting for default service account to be created ...
	I1014 08:47:35.054886   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/default/serviceaccounts
	I1014 08:47:35.054886   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:35.054886   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:35.054886   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:35.059242   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:47:35.059242   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:35.059242   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:35 GMT
	I1014 08:47:35.059242   15224 round_trippers.go:580]     Audit-Id: 2923cb0a-1308-40bf-887d-7a385272b091
	I1014 08:47:35.059242   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:35.059242   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:35.059242   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:35.059242   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:35.059242   15224 round_trippers.go:580]     Content-Length: 262
	I1014 08:47:35.059242   15224 request.go:1351] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"2017"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"2d7618c1-d4b9-4719-9d93-d87bd887238a","resourceVersion":"332","creationTimestamp":"2024-10-14T15:22:44Z"}}]}
	I1014 08:47:35.059242   15224 default_sa.go:45] found service account: "default"
	I1014 08:47:35.059242   15224 default_sa.go:55] duration metric: took 4.3564ms for default service account to be created ...
	I1014 08:47:35.059242   15224 system_pods.go:116] waiting for k8s-apps to be running ...
	I1014 08:47:35.059242   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/namespaces/kube-system/pods
	I1014 08:47:35.059242   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:35.059242   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:35.059242   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:35.064097   15224 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 08:47:35.064191   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:35.064191   15224 round_trippers.go:580]     Audit-Id: 65e353bf-4f4a-4191-843e-20cc4e13fb38
	I1014 08:47:35.064191   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:35.064191   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:35.064191   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:35.064191   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:35.064191   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:35 GMT
	I1014 08:47:35.065155   15224 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"2017"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-fs9ct","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"fd736862-9e3e-4a3d-9a86-08efd2338477","resourceVersion":"2001","creationTimestamp":"2024-10-14T15:22:45Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"77174e7f-f433-4f74-83f1-207a21552e3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-10-14T15:22:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77174e7f-f433-4f74-83f1-207a21552e3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 90019 chars]
	I1014 08:47:35.069236   15224 system_pods.go:86] 12 kube-system pods found
	I1014 08:47:35.069236   15224 system_pods.go:89] "coredns-7c65d6cfc9-fs9ct" [fd736862-9e3e-4a3d-9a86-08efd2338477] Running
	I1014 08:47:35.069236   15224 system_pods.go:89] "etcd-multinode-671000" [098aece2-cb2c-470a-878a-872417e4387f] Running
	I1014 08:47:35.069236   15224 system_pods.go:89] "kindnet-5rqxq" [480b1f88-eb32-4638-9834-2be17b8d35ed] Running
	I1014 08:47:35.069236   15224 system_pods.go:89] "kindnet-rgbjf" [445ff184-85e8-4153-a3d0-a0185c4f95de] Running
	I1014 08:47:35.069236   15224 system_pods.go:89] "kindnet-wqrx6" [a508bbf9-7565-4c73-98cb-e9684985c298] Running
	I1014 08:47:35.069236   15224 system_pods.go:89] "kube-apiserver-multinode-671000" [64595feb-e6e8-4e69-a4b7-6459d15e3beb] Running
	I1014 08:47:35.069236   15224 system_pods.go:89] "kube-controller-manager-multinode-671000" [a5c7bb80-c844-476f-ba47-1cd4e599b92d] Running
	I1014 08:47:35.069236   15224 system_pods.go:89] "kube-proxy-kbpjf" [004b7f38-fa3b-4c2c-9524-8d5b1ba514e9] Running
	I1014 08:47:35.069236   15224 system_pods.go:89] "kube-proxy-n6txs" [796a44f9-2067-438d-9359-34d5f968c861] Running
	I1014 08:47:35.069236   15224 system_pods.go:89] "kube-proxy-r74dx" [f8d14473-8859-4015-84e9-d00656cc00c9] Running
	I1014 08:47:35.069236   15224 system_pods.go:89] "kube-scheduler-multinode-671000" [97febcab-f54d-4338-ba7c-2dc5e69b77fc] Running
	I1014 08:47:35.069236   15224 system_pods.go:89] "storage-provisioner" [fde8ff75-bc7f-4db4-b098-c3a08b38d205] Running
	I1014 08:47:35.069236   15224 system_pods.go:126] duration metric: took 9.994ms to wait for k8s-apps to be running ...
	I1014 08:47:35.069236   15224 system_svc.go:44] waiting for kubelet service to be running ....
	I1014 08:47:35.078776   15224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 08:47:35.103924   15224 system_svc.go:56] duration metric: took 34.6881ms WaitForService to wait for kubelet
	I1014 08:47:35.103924   15224 kubeadm.go:582] duration metric: took 1m14.4277825s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 08:47:35.103924   15224 node_conditions.go:102] verifying NodePressure condition ...
	I1014 08:47:35.103924   15224 round_trippers.go:463] GET https://172.20.106.123:8443/api/v1/nodes
	I1014 08:47:35.103924   15224 round_trippers.go:469] Request Headers:
	I1014 08:47:35.103924   15224 round_trippers.go:473]     Accept: application/json, */*
	I1014 08:47:35.103924   15224 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I1014 08:47:35.108233   15224 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 08:47:35.108276   15224 round_trippers.go:577] Response Headers:
	I1014 08:47:35.108276   15224 round_trippers.go:580]     Cache-Control: no-cache, private
	I1014 08:47:35.108276   15224 round_trippers.go:580]     Content-Type: application/json
	I1014 08:47:35.108276   15224 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 792cefc6-ea68-4297-a68c-a4c377fd1c18
	I1014 08:47:35.108276   15224 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ce6cd6a5-cd41-4790-830f-0c76bbbf6aee
	I1014 08:47:35.108276   15224 round_trippers.go:580]     Date: Mon, 14 Oct 2024 15:47:35 GMT
	I1014 08:47:35.108276   15224 round_trippers.go:580]     Audit-Id: 552f978c-3f4d-48f3-9401-2216552da7f9
	I1014 08:47:35.108276   15224 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"2017"},"items":[{"metadata":{"name":"multinode-671000","uid":"f5f6e0f8-d39b-40a4-a225-e2f5be71e063","resourceVersion":"1981","creationTimestamp":"2024-10-14T15:22:36Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-671000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"f9f6c2ada6d933af9900f45012fe0fe625736c5b","minikube.k8s.io/name":"multinode-671000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_10_14T08_22_41_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 16260 chars]
	I1014 08:47:35.109662   15224 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 08:47:35.109662   15224 node_conditions.go:123] node cpu capacity is 2
	I1014 08:47:35.109662   15224 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 08:47:35.109785   15224 node_conditions.go:123] node cpu capacity is 2
	I1014 08:47:35.109785   15224 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 08:47:35.109785   15224 node_conditions.go:123] node cpu capacity is 2
	I1014 08:47:35.109785   15224 node_conditions.go:105] duration metric: took 5.8607ms to run NodePressure ...
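[editor's note] The node_conditions lines verify each node's reported capacity (ephemeral storage, CPU count) rather than live pressure signals. A toy version of that check: kiToBytes handles only the "Ki" suffix seen in this log (real code uses apimachinery's resource.Quantity, which handles every suffix), and the floor values are hypothetical, chosen only to make the comparison concrete:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// kiToBytes parses the narrow "<N>Ki" quantity form seen above.
func kiToBytes(q string) int64 {
	n, _ := strconv.ParseInt(strings.TrimSuffix(q, "Ki"), 10, 64)
	return n * 1024
}

func main() {
	// Figures copied from the node_conditions lines above.
	storageBytes := kiToBytes("17734596Ki")
	cpuCount := 2
	const minStorage = int64(1) << 30 // hypothetical 1 GiB floor
	const minCPU = 1                  // hypothetical CPU floor
	if storageBytes < minStorage || cpuCount < minCPU {
		fmt.Println("node under pressure")
		return
	}
	fmt.Printf("node OK: %d bytes ephemeral storage, %d cpus\n", storageBytes, cpuCount)
}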
	I1014 08:47:35.109785   15224 start.go:241] waiting for startup goroutines ...
	I1014 08:47:35.109785   15224 start.go:246] waiting for cluster config update ...
	I1014 08:47:35.109785   15224 start.go:255] writing updated cluster config ...
	I1014 08:47:35.114765   15224 out.go:201] 
	I1014 08:47:35.130819   15224 config.go:182] Loaded profile config "multinode-671000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 08:47:35.130819   15224 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000\config.json ...
	I1014 08:47:35.137539   15224 out.go:177] * Starting "multinode-671000-m02" worker node in "multinode-671000" cluster
	I1014 08:47:35.140093   15224 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1014 08:47:35.140093   15224 cache.go:56] Caching tarball of preloaded images
	I1014 08:47:35.141226   15224 preload.go:172] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1014 08:47:35.141295   15224 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1014 08:47:35.141295   15224 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000\config.json ...
	I1014 08:47:35.144478   15224 start.go:360] acquireMachinesLock for multinode-671000-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 08:47:35.144686   15224 start.go:364] duration metric: took 139.1µs to acquireMachinesLock for "multinode-671000-m02"
	I1014 08:47:35.144879   15224 start.go:96] Skipping create...Using existing machine configuration
	I1014 08:47:35.144922   15224 fix.go:54] fixHost starting: m02
	I1014 08:47:35.145418   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000-m02 ).state
	I1014 08:47:37.290609   15224 main.go:141] libmachine: [stdout =====>] : Off
	
	I1014 08:47:37.290668   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:47:37.290735   15224 fix.go:112] recreateIfNeeded on multinode-671000-m02: state=Stopped err=<nil>
	W1014 08:47:37.290735   15224 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 08:47:37.294756   15224 out.go:177] * Restarting existing hyperv VM for "multinode-671000-m02" ...
	I1014 08:47:37.297201   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-671000-m02
	I1014 08:47:40.963581   15224 main.go:141] libmachine: [stdout =====>] : 
	I1014 08:47:40.963581   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:47:40.963581   15224 main.go:141] libmachine: Waiting for host to start...
	I1014 08:47:40.963676   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000-m02 ).state
	I1014 08:47:43.167324   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:47:43.167324   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:47:43.167510   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 08:47:45.616967   15224 main.go:141] libmachine: [stdout =====>] : 
	I1014 08:47:45.617759   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:47:46.617958   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000-m02 ).state
	I1014 08:47:48.784401   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:47:48.784401   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:47:48.784401   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 08:47:51.314931   15224 main.go:141] libmachine: [stdout =====>] : 
	I1014 08:47:51.314931   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:47:52.315174   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000-m02 ).state
	I1014 08:47:54.453343   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:47:54.453343   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:47:54.453343   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 08:47:56.903430   15224 main.go:141] libmachine: [stdout =====>] : 
	I1014 08:47:56.903430   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:47:57.903977   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000-m02 ).state
	I1014 08:48:00.069797   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:48:00.069900   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:48:00.069975   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 08:48:02.538439   15224 main.go:141] libmachine: [stdout =====>] : 
	I1014 08:48:02.538439   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:48:03.539297   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000-m02 ).state
	I1014 08:48:05.690154   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:48:05.690154   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:48:05.690154   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 08:48:08.243357   15224 main.go:141] libmachine: [stdout =====>] : 172.20.98.93
	
	I1014 08:48:08.243357   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:48:08.248402   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000-m02 ).state
	I1014 08:48:10.335432   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:48:10.335432   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:48:10.335432   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 08:48:12.806239   15224 main.go:141] libmachine: [stdout =====>] : 172.20.98.93
	
	I1014 08:48:12.807330   15224 main.go:141] libmachine: [stderr =====>] : 
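[editor's note] "Waiting for host to start..." is a poll that shells out to PowerShell twice per pass, once for the VM state and once for the first adapter's first IP address, looping until the IP query returns a non-empty string (here it took about 27 seconds). A sketch of the same loop via os/exec on the Windows host; the cmdlet expressions are copied verbatim from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// ps runs one PowerShell expression and returns trimmed stdout.
func ps(expr string) (string, error) {
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", expr).Output()
	return strings.TrimSpace(string(out)), err
}

// waitForIP polls Hyper-V until the VM is Running and reports an IP,
// mirroring the state/ipaddresses pair of queries above.
func waitForIP(vm string) (string, error) {
	for {
		state, err := ps(fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", vm))
		if err != nil {
			return "", err
		}
		if state == "Running" {
			ip, err := ps(fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vm))
			if err != nil {
				return "", err
			}
			if ip != "" {
				return ip, nil
			}
		}
		time.Sleep(time.Second)
	}
}

func main() {
	ip, err := waitForIP("multinode-671000-m02")
	if err != nil {
		panic(err)
	}
	fmt.Println("VM reachable at", ip)
}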
	I1014 08:48:12.807330   15224 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000\config.json ...
	I1014 08:48:12.810508   15224 machine.go:93] provisionDockerMachine start ...
	I1014 08:48:12.810741   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000-m02 ).state
	I1014 08:48:14.860815   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:48:14.860815   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:48:14.860911   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 08:48:17.358192   15224 main.go:141] libmachine: [stdout =====>] : 172.20.98.93
	
	I1014 08:48:17.359120   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:48:17.364419   15224 main.go:141] libmachine: Using SSH client type: native
	I1014 08:48:17.365587   15224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.98.93 22 <nil> <nil>}
	I1014 08:48:17.365587   15224 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 08:48:17.513670   15224 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1014 08:48:17.513802   15224 buildroot.go:166] provisioning hostname "multinode-671000-m02"
	I1014 08:48:17.513802   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000-m02 ).state
	I1014 08:48:19.601895   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:48:19.602349   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:48:19.602521   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 08:48:22.082086   15224 main.go:141] libmachine: [stdout =====>] : 172.20.98.93
	
	I1014 08:48:22.082188   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:48:22.088443   15224 main.go:141] libmachine: Using SSH client type: native
	I1014 08:48:22.089169   15224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.98.93 22 <nil> <nil>}
	I1014 08:48:22.089169   15224 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-671000-m02 && echo "multinode-671000-m02" | sudo tee /etc/hostname
	I1014 08:48:22.268612   15224 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-671000-m02
	
	I1014 08:48:22.268666   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000-m02 ).state
	I1014 08:48:24.332882   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:48:24.333953   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:48:24.334091   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 08:48:26.831739   15224 main.go:141] libmachine: [stdout =====>] : 172.20.98.93
	
	I1014 08:48:26.831739   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:48:26.838996   15224 main.go:141] libmachine: Using SSH client type: native
	I1014 08:48:26.839156   15224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.98.93 22 <nil> <nil>}
	I1014 08:48:26.839156   15224 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-671000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-671000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-671000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 08:48:26.998901   15224 main.go:141] libmachine: SSH cmd err, output: <nil>: 
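[editor's note] The /etc/hosts snippet above is idempotent: if no line already ends in the hostname, it rewrites an existing 127.0.1.1 entry in place, otherwise appends one. A sketch of rendering that command for an arbitrary hostname (hostsFixCmd is a hypothetical helper; minikube builds the command from its own template):

package main

import "fmt"

// hostsFixCmd renders the idempotent /etc/hosts update shown in the log.
func hostsFixCmd(name string) string {
	return fmt.Sprintf(`
		if ! grep -xq '.*\s%s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %s/g' /etc/hosts;
			else
				echo '127.0.1.1 %s' | sudo tee -a /etc/hosts;
			fi
		fi`, name, name, name)
}

func main() {
	fmt.Println(hostsFixCmd("multinode-671000-m02"))
}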
	I1014 08:48:26.998901   15224 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I1014 08:48:26.999429   15224 buildroot.go:174] setting up certificates
	I1014 08:48:26.999523   15224 provision.go:84] configureAuth start
	I1014 08:48:26.999523   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000-m02 ).state
	I1014 08:48:29.135569   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:48:29.136528   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:48:29.136614   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 08:48:31.694629   15224 main.go:141] libmachine: [stdout =====>] : 172.20.98.93
	
	I1014 08:48:31.694629   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:48:31.695632   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000-m02 ).state
	I1014 08:48:33.763168   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:48:33.763168   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:48:33.763168   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 08:48:36.228813   15224 main.go:141] libmachine: [stdout =====>] : 172.20.98.93
	
	I1014 08:48:36.228997   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:48:36.229086   15224 provision.go:143] copyHostCerts
	I1014 08:48:36.229284   15224 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I1014 08:48:36.229284   15224 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I1014 08:48:36.229284   15224 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I1014 08:48:36.230106   15224 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1014 08:48:36.231513   15224 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I1014 08:48:36.231513   15224 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I1014 08:48:36.231513   15224 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I1014 08:48:36.232256   15224 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1014 08:48:36.232976   15224 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I1014 08:48:36.232976   15224 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I1014 08:48:36.233510   15224 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I1014 08:48:36.233701   15224 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I1014 08:48:36.235210   15224 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-671000-m02 san=[127.0.0.1 172.20.98.93 localhost minikube multinode-671000-m02]
	I1014 08:48:36.448347   15224 provision.go:177] copyRemoteCerts
	I1014 08:48:36.458837   15224 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 08:48:36.458837   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000-m02 ).state
	I1014 08:48:38.495078   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:48:38.495078   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:48:38.495078   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 08:48:40.956097   15224 main.go:141] libmachine: [stdout =====>] : 172.20.98.93
	
	I1014 08:48:40.956097   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:48:40.956829   15224 sshutil.go:53] new ssh client: &{IP:172.20.98.93 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-671000-m02\id_rsa Username:docker}
	I1014 08:48:41.073415   15224 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.6145696s)
	I1014 08:48:41.073477   15224 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1014 08:48:41.073550   15224 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I1014 08:48:41.126083   15224 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1014 08:48:41.126664   15224 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1014 08:48:41.180628   15224 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1014 08:48:41.181202   15224 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1014 08:48:41.232346   15224 provision.go:87] duration metric: took 14.2327966s to configureAuth
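[editor's note] configureAuth regenerates a server certificate whose SAN set covers 127.0.0.1, the VM's current IP, localhost, minikube, and the machine name, signs it with the local CA, and scps the result to /etc/docker. A self-contained crypto/x509 sketch of that shape of cert generation, with a throwaway in-memory CA standing in for ca.pem/ca-key.pem; key sizes, lifetimes, and subjects here are illustrative, not minikube's actual values:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA in place of the on-disk ca.pem/ca-key.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{Organization: []string{"minikubeCA"}},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the SAN set from the provision.go line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-671000-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(10, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.20.98.93")},
		DNSNames:     []string{"localhost", "minikube", "multinode-671000-m02"},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	fmt.Println(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER})))
}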
	I1014 08:48:41.232346   15224 buildroot.go:189] setting minikube options for container-runtime
	I1014 08:48:41.233398   15224 config.go:182] Loaded profile config "multinode-671000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 08:48:41.233492   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000-m02 ).state
	I1014 08:48:43.314059   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:48:43.314614   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:48:43.314614   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 08:48:45.784112   15224 main.go:141] libmachine: [stdout =====>] : 172.20.98.93
	
	I1014 08:48:45.787503   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:48:45.792289   15224 main.go:141] libmachine: Using SSH client type: native
	I1014 08:48:45.792289   15224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.98.93 22 <nil> <nil>}
	I1014 08:48:45.792289   15224 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1014 08:48:45.937059   15224 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1014 08:48:45.937059   15224 buildroot.go:70] root file system type: tmpfs
	I1014 08:48:45.937312   15224 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1014 08:48:45.937312   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000-m02 ).state
	I1014 08:48:48.024204   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:48:48.025031   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:48:48.025031   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 08:48:50.547062   15224 main.go:141] libmachine: [stdout =====>] : 172.20.98.93
	
	I1014 08:48:50.547062   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:48:50.553187   15224 main.go:141] libmachine: Using SSH client type: native
	I1014 08:48:50.554026   15224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.98.93 22 <nil> <nil>}
	I1014 08:48:50.554026   15224 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.20.106.123"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1014 08:48:50.726180   15224 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.20.106.123
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1014 08:48:50.726334   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000-m02 ).state
	I1014 08:48:52.810701   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:48:52.811129   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:48:52.811129   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 08:48:55.282514   15224 main.go:141] libmachine: [stdout =====>] : 172.20.98.93
	
	I1014 08:48:55.282721   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:48:55.287507   15224 main.go:141] libmachine: Using SSH client type: native
	I1014 08:48:55.288280   15224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.98.93 22 <nil> <nil>}
	I1014 08:48:55.288280   15224 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1014 08:48:57.647303   15224 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1014 08:48:57.647413   15224 machine.go:96] duration metric: took 44.8366885s to provisionDockerMachine
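[editor's note] The unit install above is guarded: `diff -u old new` exits zero only when the files match, so the `|| { mv; daemon-reload; enable; restart; }` branch fires only when the rendered unit differs or, as in this run, does not exist yet. A sketch rendering the same one-liner for an arbitrary unit (updateUnitCmd is a hypothetical helper):

package main

import "fmt"

// updateUnitCmd reproduces the diff-or-replace one-liner from the log:
// replace the unit and restart the service only when the rendered file differs.
func updateUnitCmd(unit string) string {
	path := "/lib/systemd/system/" + unit
	return fmt.Sprintf(
		"sudo diff -u %[1]s %[1]s.new || { sudo mv %[1]s.new %[1]s; "+
			"sudo systemctl -f daemon-reload && sudo systemctl -f enable %[2]s && sudo systemctl -f restart %[2]s; }",
		path, unit)
}

func main() {
	fmt.Println(updateUnitCmd("docker.service"))
}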
	I1014 08:48:57.647486   15224 start.go:293] postStartSetup for "multinode-671000-m02" (driver="hyperv")
	I1014 08:48:57.647486   15224 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 08:48:57.659006   15224 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 08:48:57.659006   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000-m02 ).state
	I1014 08:48:59.718197   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:48:59.718513   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:48:59.718625   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 08:49:02.162772   15224 main.go:141] libmachine: [stdout =====>] : 172.20.98.93
	
	I1014 08:49:02.162772   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:49:02.162772   15224 sshutil.go:53] new ssh client: &{IP:172.20.98.93 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-671000-m02\id_rsa Username:docker}
	I1014 08:49:02.268225   15224 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.609128s)
	I1014 08:49:02.280986   15224 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 08:49:02.287775   15224 command_runner.go:130] > NAME=Buildroot
	I1014 08:49:02.287775   15224 command_runner.go:130] > VERSION=2023.02.9-dirty
	I1014 08:49:02.287878   15224 command_runner.go:130] > ID=buildroot
	I1014 08:49:02.287878   15224 command_runner.go:130] > VERSION_ID=2023.02.9
	I1014 08:49:02.287878   15224 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I1014 08:49:02.287960   15224 info.go:137] Remote host: Buildroot 2023.02.9
	I1014 08:49:02.288032   15224 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I1014 08:49:02.288449   15224 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I1014 08:49:02.289395   15224 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem -> 9362.pem in /etc/ssl/certs
	I1014 08:49:02.289471   15224 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem -> /etc/ssl/certs/9362.pem
	I1014 08:49:02.299493   15224 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 08:49:02.318762   15224 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem --> /etc/ssl/certs/9362.pem (1708 bytes)
	I1014 08:49:02.369430   15224 start.go:296] duration metric: took 4.7219357s for postStartSetup
	I1014 08:49:02.369585   15224 fix.go:56] duration metric: took 1m27.2245073s for fixHost
	I1014 08:49:02.369690   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000-m02 ).state
	I1014 08:49:04.451777   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:49:04.451777   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:49:04.451777   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 08:49:06.926197   15224 main.go:141] libmachine: [stdout =====>] : 172.20.98.93
	
	I1014 08:49:06.926719   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:49:06.931668   15224 main.go:141] libmachine: Using SSH client type: native
	I1014 08:49:06.931802   15224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.98.93 22 <nil> <nil>}
	I1014 08:49:06.931802   15224 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1014 08:49:07.067443   15224 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728920947.067354943
	
	I1014 08:49:07.067443   15224 fix.go:216] guest clock: 1728920947.067354943
	I1014 08:49:07.067568   15224 fix.go:229] Guest: 2024-10-14 08:49:07.067354943 -0700 PDT Remote: 2024-10-14 08:49:02.3695854 -0700 PDT m=+295.072045601 (delta=4.697769543s)
	I1014 08:49:07.067568   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000-m02 ).state
	I1014 08:49:09.200501   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:49:09.200501   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:49:09.200501   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 08:49:11.705026   15224 main.go:141] libmachine: [stdout =====>] : 172.20.98.93
	
	I1014 08:49:11.705026   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:49:11.711643   15224 main.go:141] libmachine: Using SSH client type: native
	I1014 08:49:11.711835   15224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.98.93 22 <nil> <nil>}
	I1014 08:49:11.711835   15224 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1728920947
	I1014 08:49:11.869653   15224 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Oct 14 15:49:07 UTC 2024
	
	I1014 08:49:11.869653   15224 fix.go:236] clock set: Mon Oct 14 15:49:07 UTC 2024
	 (err=<nil>)
	I1014 08:49:11.869653   15224 start.go:83] releasing machines lock for "multinode-671000-m02", held for 1m36.7247385s
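[editor's note] Clock sync reads the guest clock with `date +%s.%N`, compares it against the host, and writes it back with `sudo date -s @<epoch>` (here the delta was about 4.7s). A sketch of parsing that output and forming the fix-up command; threshold handling is omitted:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts the "seconds.nanoseconds" output of `date +%s.%N`
// into a time.Time, as used to compute the host/guest delta above.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1728920947.067354943")
	if err != nil {
		panic(err)
	}
	fmt.Printf("guest clock: %s (delta %s)\n", guest, guest.Sub(time.Now()))
	// Past a drift threshold, the fix is a single SSH command:
	fmt.Printf("sudo date -s @%d\n", guest.Unix())
}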
	I1014 08:49:11.870308   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000-m02 ).state
	I1014 08:49:13.957633   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:49:13.957727   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:49:13.958042   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 08:49:16.447721   15224 main.go:141] libmachine: [stdout =====>] : 172.20.98.93
	
	I1014 08:49:16.447875   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:49:16.450419   15224 out.go:177] * Found network options:
	I1014 08:49:16.452580   15224 out.go:177]   - NO_PROXY=172.20.106.123
	W1014 08:49:16.455109   15224 proxy.go:119] fail to check proxy env: Error ip not in block
	I1014 08:49:16.458000   15224 out.go:177]   - NO_PROXY=172.20.106.123
	W1014 08:49:16.460379   15224 proxy.go:119] fail to check proxy env: Error ip not in block
	W1014 08:49:16.461192   15224 proxy.go:119] fail to check proxy env: Error ip not in block
	I1014 08:49:16.463479   15224 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1014 08:49:16.464095   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000-m02 ).state
	I1014 08:49:16.474410   15224 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1014 08:49:16.475530   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000-m02 ).state
	I1014 08:49:18.616922   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:49:18.617561   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:49:18.617704   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 08:49:18.649602   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:49:18.649602   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:49:18.649742   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 08:49:21.267672   15224 main.go:141] libmachine: [stdout =====>] : 172.20.98.93
	
	I1014 08:49:21.267672   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:49:21.268606   15224 sshutil.go:53] new ssh client: &{IP:172.20.98.93 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-671000-m02\id_rsa Username:docker}
	I1014 08:49:21.294372   15224 main.go:141] libmachine: [stdout =====>] : 172.20.98.93
	
	I1014 08:49:21.294372   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:49:21.295072   15224 sshutil.go:53] new ssh client: &{IP:172.20.98.93 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-671000-m02\id_rsa Username:docker}
	I1014 08:49:21.370611   15224 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I1014 08:49:21.371460   15224 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.895748s)
	W1014 08:49:21.371460   15224 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 08:49:21.383748   15224 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 08:49:21.388705   15224 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I1014 08:49:21.388705   15224 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.9252169s)
	W1014 08:49:21.388705   15224 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1014 08:49:21.417424   15224 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1014 08:49:21.417566   15224 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
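[editor's note] Disabling the stock bridge/podman CNI configs is just a rename: anything in /etc/cni/net.d matching *bridge* or *podman* gets a `.mk_disabled` suffix, the same effect as the find/-exec mv pipeline above. A sketch with os.ReadDir and os.Rename (it needs the root privileges the logged command gets via sudo):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeConfs renames bridge/podman CNI configs out of the way by
// appending ".mk_disabled", skipping files already disabled.
func disableBridgeConfs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	disabled, err := disableBridgeConfs("/etc/cni/net.d")
	if err != nil {
		fmt.Println("error:", err)
	}
	fmt.Println("disabled:", disabled)
}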
	I1014 08:49:21.417566   15224 start.go:495] detecting cgroup driver to use...
	I1014 08:49:21.417992   15224 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 08:49:21.457131   15224 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1014 08:49:21.468921   15224 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1014 08:49:21.501488   15224 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	W1014 08:49:21.501738   15224 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W1014 08:49:21.501888   15224 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1014 08:49:21.526933   15224 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1014 08:49:21.537940   15224 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1014 08:49:21.570189   15224 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1014 08:49:21.604750   15224 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1014 08:49:21.636378   15224 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1014 08:49:21.666973   15224 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 08:49:21.699577   15224 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1014 08:49:21.732318   15224 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1014 08:49:21.763082   15224 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
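[editor's note] The run of sed edits rewrites /etc/containerd/config.toml in place; the key one for the cgroupfs driver forces every `SystemdCgroup = ...` line to false. The same edit expressed with Go's regexp package, shown on a two-line excerpt:

package main

import (
	"fmt"
	"regexp"
)

// Equivalent of the logged sed edit: every `SystemdCgroup = ...` line
// becomes `SystemdCgroup = false`, preserving indentation.
var systemdCgroup = regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)

func useCgroupfs(configTOML string) string {
	return systemdCgroup.ReplaceAllString(configTOML, "${1}SystemdCgroup = false")
}

func main() {
	in := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true`
	fmt.Println(useCgroupfs(in))
}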
	I1014 08:49:21.795815   15224 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 08:49:21.816435   15224 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1014 08:49:21.816704   15224 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1014 08:49:21.828042   15224 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1014 08:49:21.860032   15224 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
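[editor's note] When the bridge-nf-call-iptables sysctl path is missing, the netfilter bridge module simply is not loaded yet, so the fallback above is `modprobe br_netfilter`, followed by enabling IPv4 forwarding. A sketch of that sequence (it must run as root inside the guest; ensureNetfilter is a hypothetical name):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// ensureNetfilter mirrors the logged fallback: if the bridge-nf-call-iptables
// sysctl is absent, load br_netfilter, then enable IPv4 forwarding directly.
func ensureNetfilter() error {
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %v: %s", err, strings.TrimSpace(string(out)))
		}
	}
	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644)
}

func main() {
	if err := ensureNetfilter(); err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("bridge netfilter + ip_forward enabled")
}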
	I1014 08:49:21.889832   15224 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 08:49:22.097884   15224 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1014 08:49:22.134711   15224 start.go:495] detecting cgroup driver to use...
	I1014 08:49:22.147519   15224 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1014 08:49:22.172688   15224 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I1014 08:49:22.172853   15224 command_runner.go:130] > [Unit]
	I1014 08:49:22.172853   15224 command_runner.go:130] > Description=Docker Application Container Engine
	I1014 08:49:22.172853   15224 command_runner.go:130] > Documentation=https://docs.docker.com
	I1014 08:49:22.172853   15224 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I1014 08:49:22.172853   15224 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I1014 08:49:22.172853   15224 command_runner.go:130] > StartLimitBurst=3
	I1014 08:49:22.172941   15224 command_runner.go:130] > StartLimitIntervalSec=60
	I1014 08:49:22.172941   15224 command_runner.go:130] > [Service]
	I1014 08:49:22.172980   15224 command_runner.go:130] > Type=notify
	I1014 08:49:22.172980   15224 command_runner.go:130] > Restart=on-failure
	I1014 08:49:22.172980   15224 command_runner.go:130] > Environment=NO_PROXY=172.20.106.123
	I1014 08:49:22.172980   15224 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1014 08:49:22.172980   15224 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1014 08:49:22.172980   15224 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1014 08:49:22.172980   15224 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1014 08:49:22.172980   15224 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1014 08:49:22.172980   15224 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1014 08:49:22.172980   15224 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1014 08:49:22.172980   15224 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1014 08:49:22.172980   15224 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1014 08:49:22.172980   15224 command_runner.go:130] > ExecStart=
	I1014 08:49:22.172980   15224 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I1014 08:49:22.172980   15224 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1014 08:49:22.172980   15224 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1014 08:49:22.172980   15224 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1014 08:49:22.172980   15224 command_runner.go:130] > LimitNOFILE=infinity
	I1014 08:49:22.172980   15224 command_runner.go:130] > LimitNPROC=infinity
	I1014 08:49:22.172980   15224 command_runner.go:130] > LimitCORE=infinity
	I1014 08:49:22.172980   15224 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1014 08:49:22.172980   15224 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1014 08:49:22.172980   15224 command_runner.go:130] > TasksMax=infinity
	I1014 08:49:22.172980   15224 command_runner.go:130] > TimeoutStartSec=0
	I1014 08:49:22.172980   15224 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1014 08:49:22.172980   15224 command_runner.go:130] > Delegate=yes
	I1014 08:49:22.172980   15224 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1014 08:49:22.172980   15224 command_runner.go:130] > KillMode=process
	I1014 08:49:22.172980   15224 command_runner.go:130] > [Install]
	I1014 08:49:22.172980   15224 command_runner.go:130] > WantedBy=multi-user.target
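The empty ExecStart= followed by a populated one is the standard systemd override idiom that the unit's own comments describe: a drop-in cannot append to ExecStart, so the first directive clears the command inherited from the base unit and the second replaces it. In miniature (the drop-in path here is illustrative):

    # /etc/systemd/system/docker.service.d/override.conf
    [Service]
    ExecStart=
    ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576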
	I1014 08:49:22.185260   15224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 08:49:22.218868   15224 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 08:49:22.262544   15224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 08:49:22.302232   15224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1014 08:49:22.342680   15224 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1014 08:49:22.409079   15224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1014 08:49:22.435020   15224 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 08:49:22.471187   15224 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1014 08:49:22.485048   15224 ssh_runner.go:195] Run: which cri-dockerd
	I1014 08:49:22.492677   15224 command_runner.go:130] > /usr/bin/cri-dockerd
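The crictl.yaml written above is what lets the stock crictl binary talk to Docker through cri-dockerd: every CRI call goes to the unix socket cri-dockerd exposes and is translated into dockerd API calls. A spot-check along these lines (not part of this run) would echo the runtime details printed further below:

    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version
    # expect RuntimeName: docker, RuntimeVersion: 27.3.1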
	I1014 08:49:22.507368   15224 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1014 08:49:22.526949   15224 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1014 08:49:22.569062   15224 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1014 08:49:22.771125   15224 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1014 08:49:22.958430   15224 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1014 08:49:22.958552   15224 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
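The 130-byte daemon.json itself is not printed in the log; for a cgroupfs override it is typically a small file along these lines (a sketch of what minikube writes, not the verbatim bytes):

    {
      "exec-opts": ["native.cgroupdriver=cgroupfs"],
      "log-driver": "json-file",
      "log-opts": { "max-size": "100m" },
      "storage-driver": "overlay2"
    }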
	I1014 08:49:23.003427   15224 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 08:49:23.194136   15224 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1014 08:49:25.856429   15224 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6622885s)
	I1014 08:49:25.867684   15224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1014 08:49:25.901859   15224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1014 08:49:25.939885   15224 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1014 08:49:26.143412   15224 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1014 08:49:26.354688   15224 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 08:49:26.559829   15224 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1014 08:49:26.603222   15224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1014 08:49:26.644145   15224 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 08:49:26.861679   15224 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
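cri-docker is socket-activated, which is why the sequence above unmasks, enables, and restarts cri-docker.socket rather than starting the service directly; the service comes up on the first connection to the socket. The 60-second wait that follows just polls for the socket file, roughly:

    systemctl is-enabled cri-docker.socket   # enabled, per the steps above
    stat /var/run/cri-dockerd.sock           # appears once the socket unit is listening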
	I1014 08:49:26.972510   15224 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1014 08:49:26.984161   15224 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1014 08:49:26.993907   15224 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1014 08:49:26.993974   15224 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1014 08:49:26.993974   15224 command_runner.go:130] > Device: 0,22	Inode: 849         Links: 1
	I1014 08:49:26.993974   15224 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I1014 08:49:26.993974   15224 command_runner.go:130] > Access: 2024-10-14 15:49:26.887253131 +0000
	I1014 08:49:26.993974   15224 command_runner.go:130] > Modify: 2024-10-14 15:49:26.887253131 +0000
	I1014 08:49:26.993974   15224 command_runner.go:130] > Change: 2024-10-14 15:49:26.890253139 +0000
	I1014 08:49:26.994063   15224 command_runner.go:130] >  Birth: -
	I1014 08:49:26.994063   15224 start.go:563] Will wait 60s for crictl version
	I1014 08:49:27.005213   15224 ssh_runner.go:195] Run: which crictl
	I1014 08:49:27.011904   15224 command_runner.go:130] > /usr/bin/crictl
	I1014 08:49:27.022689   15224 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1014 08:49:27.088510   15224 command_runner.go:130] > Version:  0.1.0
	I1014 08:49:27.089329   15224 command_runner.go:130] > RuntimeName:  docker
	I1014 08:49:27.089329   15224 command_runner.go:130] > RuntimeVersion:  27.3.1
	I1014 08:49:27.089444   15224 command_runner.go:130] > RuntimeApiVersion:  v1
	I1014 08:49:27.089444   15224 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I1014 08:49:27.099805   15224 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1014 08:49:27.135610   15224 command_runner.go:130] > 27.3.1
	I1014 08:49:27.147639   15224 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1014 08:49:27.184703   15224 command_runner.go:130] > 27.3.1
	I1014 08:49:27.189196   15224 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	I1014 08:49:27.192727   15224 out.go:177]   - env NO_PROXY=172.20.106.123
	I1014 08:49:27.195474   15224 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I1014 08:49:27.200175   15224 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I1014 08:49:27.200175   15224 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I1014 08:49:27.200175   15224 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I1014 08:49:27.200175   15224 ip.go:211] Found interface: {Index:13 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:08:65:4d Flags:up|broadcast|multicast|running}
	I1014 08:49:27.203813   15224 ip.go:214] interface addr: fe80::b548:ff79:d464:75b8/64
	I1014 08:49:27.203813   15224 ip.go:214] interface addr: 172.20.96.1/20
	I1014 08:49:27.216037   15224 ssh_runner.go:195] Run: grep 172.20.96.1	host.minikube.internal$ /etc/hosts
	I1014 08:49:27.221597   15224 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.20.96.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
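The /etc/hosts one-liner above is idempotent: it filters out any previous host.minikube.internal entry before appending the current gateway IP, so repeated starts never accumulate duplicate lines. The general shape, with NAME and IP as placeholders:

    { grep -v $'\tNAME$' /etc/hosts; printf 'IP\tNAME\n'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts   # one line per name, newest IP wins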
	I1014 08:49:27.241964   15224 mustload.go:65] Loading cluster: multinode-671000
	I1014 08:49:27.242596   15224 config.go:182] Loaded profile config "multinode-671000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 08:49:27.243307   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000 ).state
	I1014 08:49:29.280867   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:49:29.281034   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:49:29.281034   15224 host.go:66] Checking if "multinode-671000" exists ...
	I1014 08:49:29.281682   15224 certs.go:68] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-671000 for IP: 172.20.98.93
	I1014 08:49:29.281682   15224 certs.go:194] generating shared ca certs ...
	I1014 08:49:29.281765   15224 certs.go:226] acquiring lock for ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 08:49:29.282412   15224 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I1014 08:49:29.282412   15224 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I1014 08:49:29.282412   15224 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I1014 08:49:29.283098   15224 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I1014 08:49:29.283098   15224 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1014 08:49:29.283098   15224 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1014 08:49:29.283837   15224 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\936.pem (1338 bytes)
	W1014 08:49:29.284029   15224 certs.go:480] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\936_empty.pem, impossibly tiny 0 bytes
	I1014 08:49:29.284156   15224 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1014 08:49:29.284504   15224 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I1014 08:49:29.284504   15224 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1014 08:49:29.285036   15224 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I1014 08:49:29.285239   15224 certs.go:484] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem (1708 bytes)
	I1014 08:49:29.285955   15224 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1014 08:49:29.286137   15224 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\936.pem -> /usr/share/ca-certificates/936.pem
	I1014 08:49:29.286137   15224 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem -> /usr/share/ca-certificates/9362.pem
	I1014 08:49:29.286137   15224 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 08:49:29.337430   15224 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1014 08:49:29.383833   15224 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 08:49:29.438248   15224 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 08:49:29.486028   15224 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 08:49:29.532209   15224 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\936.pem --> /usr/share/ca-certificates/936.pem (1338 bytes)
	I1014 08:49:29.578861   15224 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem --> /usr/share/ca-certificates/9362.pem (1708 bytes)
	I1014 08:49:29.638595   15224 ssh_runner.go:195] Run: openssl version
	I1014 08:49:29.648558   15224 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I1014 08:49:29.661385   15224 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9362.pem && ln -fs /usr/share/ca-certificates/9362.pem /etc/ssl/certs/9362.pem"
	I1014 08:49:29.697094   15224 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9362.pem
	I1014 08:49:29.705066   15224 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 14 14:00 /usr/share/ca-certificates/9362.pem
	I1014 08:49:29.705066   15224 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 14:00 /usr/share/ca-certificates/9362.pem
	I1014 08:49:29.717851   15224 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9362.pem
	I1014 08:49:29.725980   15224 command_runner.go:130] > 3ec20f2e
	I1014 08:49:29.737673   15224 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9362.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 08:49:29.768028   15224 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 08:49:29.799670   15224 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 08:49:29.808393   15224 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 14 13:44 /usr/share/ca-certificates/minikubeCA.pem
	I1014 08:49:29.808393   15224 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 13:44 /usr/share/ca-certificates/minikubeCA.pem
	I1014 08:49:29.820216   15224 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 08:49:29.829712   15224 command_runner.go:130] > b5213941
	I1014 08:49:29.843328   15224 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 08:49:29.877150   15224 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/936.pem && ln -fs /usr/share/ca-certificates/936.pem /etc/ssl/certs/936.pem"
	I1014 08:49:29.910960   15224 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/936.pem
	I1014 08:49:29.918146   15224 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 14 14:00 /usr/share/ca-certificates/936.pem
	I1014 08:49:29.918275   15224 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 14:00 /usr/share/ca-certificates/936.pem
	I1014 08:49:29.930357   15224 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/936.pem
	I1014 08:49:29.939713   15224 command_runner.go:130] > 51391683
	I1014 08:49:29.953152   15224 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/936.pem /etc/ssl/certs/51391683.0"
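Each certificate installed above is also symlinked under its OpenSSL subject hash (the 3ec20f2e, b5213941, and 51391683 values printed by openssl x509 -hash), which is how OpenSSL's c_rehash-style lookup finds a CA in /etc/ssl/certs. The pattern, as a sketch for the minikube CA:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # here: b5213941.0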
	I1014 08:49:29.988633   15224 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 08:49:29.996061   15224 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1014 08:49:29.996061   15224 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1014 08:49:29.996061   15224 kubeadm.go:934] updating node {m02 172.20.98.93 8443 v1.31.1 docker false true} ...
	I1014 08:49:29.996596   15224 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-671000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.20.98.93
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-671000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 08:49:30.008648   15224 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1014 08:49:30.049887   15224 command_runner.go:130] > kubeadm
	I1014 08:49:30.049887   15224 command_runner.go:130] > kubectl
	I1014 08:49:30.049887   15224 command_runner.go:130] > kubelet
	I1014 08:49:30.049887   15224 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 08:49:30.067908   15224 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1014 08:49:30.109411   15224 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I1014 08:49:30.149254   15224 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 08:49:30.192417   15224 ssh_runner.go:195] Run: grep 172.20.106.123	control-plane.minikube.internal$ /etc/hosts
	I1014 08:49:30.198430   15224 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.20.106.123	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 08:49:30.229076   15224 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 08:49:30.429839   15224 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 08:49:30.459268   15224 host.go:66] Checking if "multinode-671000" exists ...
	I1014 08:49:30.460038   15224 start.go:317] joinCluster: &{Name:multinode-671000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-671000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.106.123 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.20.98.93 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.20.102.29 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 08:49:30.460038   15224 start.go:330] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:172.20.98.93 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1014 08:49:30.460038   15224 host.go:66] Checking if "multinode-671000-m02" exists ...
	I1014 08:49:30.460778   15224 mustload.go:65] Loading cluster: multinode-671000
	I1014 08:49:30.461477   15224 config.go:182] Loaded profile config "multinode-671000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 08:49:30.462235   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000 ).state
	I1014 08:49:32.581996   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:49:32.581996   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:49:32.581996   15224 host.go:66] Checking if "multinode-671000" exists ...
	I1014 08:49:32.582955   15224 api_server.go:166] Checking apiserver status ...
	I1014 08:49:32.594452   15224 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 08:49:32.594452   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000 ).state
	I1014 08:49:34.682724   15224 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:49:34.682724   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:49:34.683409   15224 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000 ).networkadapters[0]).ipaddresses[0]
	I1014 08:49:37.153707   15224 main.go:141] libmachine: [stdout =====>] : 172.20.106.123
	
	I1014 08:49:37.153900   15224 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:49:37.153900   15224 sshutil.go:53] new ssh client: &{IP:172.20.106.123 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-671000\id_rsa Username:docker}
	I1014 08:49:37.261414   15224 command_runner.go:130] > 1906
	I1014 08:49:37.261482   15224 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.667021s)
	I1014 08:49:37.273215   15224 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1906/cgroup
	W1014 08:49:37.293737   15224 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1906/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1014 08:49:37.306060   15224 ssh_runner.go:195] Run: ls
	I1014 08:49:37.314736   15224 api_server.go:253] Checking apiserver healthz at https://172.20.106.123:8443/healthz ...
	I1014 08:49:37.323952   15224 api_server.go:279] https://172.20.106.123:8443/healthz returned 200:
	ok
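The freezer warning a few lines up is harmless: freezer is a named cgroup v1 controller, so on a cgroup v2 guest /proc/<pid>/cgroup carries only the unified "0::/..." entry, the egrep matches nothing, and minikube falls back to probing the apiserver's healthz endpoint directly, equivalent to:

    sudo egrep '^[0-9]+:freezer:' /proc/1906/cgroup || echo "no freezer controller (cgroup v2)"
    curl -k https://172.20.106.123:8443/healthz   # expect: ok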
	I1014 08:49:37.334192   15224 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl drain multinode-671000-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data
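start.go:330 noted that the existing worker m02 is removed before rejoining; the drain above evicts (here: force-deletes) its pods with a one-second grace period. The remainder of the rejoin is not shown in this excerpt, but under the standard kubeadm worker flow it would look roughly like this (token and hash are placeholders):

    # on the control plane: mint a fresh join command
    sudo kubeadm token create --print-join-command
    # on the worker m02: wipe local state, then rejoin
    sudo kubeadm reset -f
    sudo kubeadm join 172.20.106.123:8443 --token <token> \
        --discovery-token-ca-cert-hash sha256:<hash>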
	
	
	==> Docker <==
	Oct 14 15:46:47 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:47.559859508Z" level=info msg="shim disconnected" id=c76c2585681071ddc096fbc042a644b58d0b7d5d604107c6906a076c7aa1cca6 namespace=moby
	Oct 14 15:46:47 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:47.560270512Z" level=warning msg="cleaning up after shim disconnected" id=c76c2585681071ddc096fbc042a644b58d0b7d5d604107c6906a076c7aa1cca6 namespace=moby
	Oct 14 15:46:47 multinode-671000 dockerd[1093]: time="2024-10-14T15:46:47.560505714Z" level=info msg="cleaning up dead shim" namespace=moby
	Oct 14 15:47:03 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:03.070959923Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 14 15:47:03 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:03.071176624Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 14 15:47:03 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:03.071240924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 15:47:03 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:03.071756926Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.071716036Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.071943436Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.071968036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.072116937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 15:47:20 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:47:20Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/429b989a1a986d23a2e5aee0de1aef1e683a014bebb587981622bd80a3ac5221/resolv.conf as [nameserver 172.20.96.1]"
	Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.295865797Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.295993998Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.296019698Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.296117898Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 15:47:20 multinode-671000 cri-dockerd[1356]: time="2024-10-14T15:47:20Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9092d17516eb35243fd461a360605e738727838ee50f870f3bd6c290fd061d20/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.536751498Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.537062099Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.537100499Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.537246499Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.821494873Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.821592273Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.821611273Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 14 15:47:20 multinode-671000 dockerd[1093]: time="2024-10-14T15:47:20.821730874Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1adddc667bd90       8c811b4aec35f                                                                                         2 minutes ago       Running             busybox                   1                   9092d17516eb3       busybox-7dff88458-vlp7j
	5d223e2e64fcd       c69fa2e9cbf5f                                                                                         2 minutes ago       Running             coredns                   1                   429b989a1a986       coredns-7c65d6cfc9-fs9ct
	9d526b02ee41c       6e38f40d628db                                                                                         3 minutes ago       Running             storage-provisioner       2                   cdcdd532ba136       storage-provisioner
	bba035362eb97       3a5bc24055c9e                                                                                         3 minutes ago       Running             kindnet-cni               1                   7bcadf1f0885f       kindnet-wqrx6
	c76c258568107       6e38f40d628db                                                                                         3 minutes ago       Exited              storage-provisioner       1                   cdcdd532ba136       storage-provisioner
	e83db276dec37       60c005f310ff3                                                                                         3 minutes ago       Running             kube-proxy                1                   6f8bdf552734e       kube-proxy-r74dx
	48c8492e231e1       2e96e5913fc06                                                                                         3 minutes ago       Running             etcd                      0                   0697a11790e80       etcd-multinode-671000
	8af48c446f7e1       175ffd71cce3d                                                                                         3 minutes ago       Running             kube-controller-manager   1                   7bd4c36606eef       kube-controller-manager-multinode-671000
	a834664fc8b80       6bab7719df100                                                                                         3 minutes ago       Running             kube-apiserver            0                   6155e8be2d5d7       kube-apiserver-multinode-671000
	d428685276e1e       9aa1fad941575                                                                                         3 minutes ago       Running             kube-scheduler            1                   1d3033f871fb1       kube-scheduler-multinode-671000
	cbf0b40e378b9       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   23 minutes ago      Exited              busybox                   0                   06e529266db4b       busybox-7dff88458-vlp7j
	d9831e9f8ce8c       c69fa2e9cbf5f                                                                                         27 minutes ago      Exited              coredns                   0                   2f8cc9a218fef       coredns-7c65d6cfc9-fs9ct
	fcdf89a3ac8ce       kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387              27 minutes ago      Exited              kindnet-cni               0                   5e48ddcfdf90a       kindnet-wqrx6
	ea19428d70363       60c005f310ff3                                                                                         27 minutes ago      Exited              kube-proxy                0                   7144d8ce208cf       kube-proxy-r74dx
	661e75bbf6b46       9aa1fad941575                                                                                         27 minutes ago      Exited              kube-scheduler            0                   2dc78387553ff       kube-scheduler-multinode-671000
	712aad669c9f6       175ffd71cce3d                                                                                         27 minutes ago      Exited              kube-controller-manager   0                   bfdde08319e32       kube-controller-manager-multinode-671000
	
	
	==> coredns [5d223e2e64fc] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6020d6b23d8ab86e45d1d2aab12a43bd19ffd1800c3e6fd0a66779be525e59cd5618fbe141b55ee3a43e920652bc72e7efafa696e55e63b2f614e5fc80dabe10
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:42996 - 9104 "HINFO IN 5434967794797104596.5472118418078127170. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.148386647s
	
	
	==> coredns [d9831e9f8ce8] <==
	[INFO] 10.244.0.3:41090 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0001329s
	[INFO] 10.244.0.3:45330 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001397s
	[INFO] 10.244.0.3:46520 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000154699s
	[INFO] 10.244.0.3:40342 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0001347s
	[INFO] 10.244.0.3:60385 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0001518s
	[INFO] 10.244.0.3:56338 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001527s
	[INFO] 10.244.0.3:50480 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0002505s
	[INFO] 10.244.1.2:60726 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002297s
	[INFO] 10.244.1.2:44347 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0002249s
	[INFO] 10.244.1.2:49092 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001075s
	[INFO] 10.244.1.2:54937 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000068s
	[INFO] 10.244.0.3:59749 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002552s
	[INFO] 10.244.0.3:56008 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0002499s
	[INFO] 10.244.0.3:60338 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0001065s
	[INFO] 10.244.0.3:49712 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001634s
	[INFO] 10.244.1.2:56508 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001577s
	[INFO] 10.244.1.2:45387 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001545s
	[INFO] 10.244.1.2:39608 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0001419s
	[INFO] 10.244.1.2:53878 - 5 "PTR IN 1.96.20.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.0001923s
	[INFO] 10.244.0.3:58589 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0002828s
	[INFO] 10.244.0.3:58608 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.0001609s
	[INFO] 10.244.0.3:52599 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000633s
	[INFO] 10.244.0.3:58233 - 5 "PTR IN 1.96.20.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.0001058s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-671000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-671000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b
	                    minikube.k8s.io/name=multinode-671000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_14T08_22_41_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Oct 2024 15:22:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-671000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Oct 2024 15:49:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 14 Oct 2024 15:46:59 +0000   Mon, 14 Oct 2024 15:22:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 14 Oct 2024 15:46:59 +0000   Mon, 14 Oct 2024 15:22:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 14 Oct 2024 15:46:59 +0000   Mon, 14 Oct 2024 15:22:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 14 Oct 2024 15:46:59 +0000   Mon, 14 Oct 2024 15:46:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.20.106.123
	  Hostname:    multinode-671000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 fc389f3b9e2846b4b909cfc8e7984541
	  System UUID:                f72bc210-2ad8-1e4f-ad7d-3e75159c3d98
	  Boot ID:                    98d09a99-1eff-402d-837f-6cacdc4463d7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-vlp7j                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 coredns-7c65d6cfc9-fs9ct                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 etcd-multinode-671000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         3m52s
	  kube-system                 kindnet-wqrx6                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      27m
	  kube-system                 kube-apiserver-multinode-671000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m54s
	  kube-system                 kube-controller-manager-multinode-671000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-proxy-r74dx                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-scheduler-multinode-671000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                 From             Message
	  ----    ------                   ----                ----             -------
	  Normal  Starting                 27m                 kube-proxy       
	  Normal  Starting                 3m51s               kube-proxy       
	  Normal  Starting                 27m                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  27m (x8 over 27m)   kubelet          Node multinode-671000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27m (x8 over 27m)   kubelet          Node multinode-671000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27m (x7 over 27m)   kubelet          Node multinode-671000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  27m                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    27m                 kubelet          Node multinode-671000 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  27m                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  27m                 kubelet          Node multinode-671000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     27m                 kubelet          Node multinode-671000 status is now: NodeHasSufficientPID
	  Normal  Starting                 27m                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           27m                 node-controller  Node multinode-671000 event: Registered Node multinode-671000 in Controller
	  Normal  NodeReady                27m                 kubelet          Node multinode-671000 status is now: NodeReady
	  Normal  Starting                 4m                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3m59s (x8 over 4m)  kubelet          Node multinode-671000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m59s (x8 over 4m)  kubelet          Node multinode-671000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m59s (x7 over 4m)  kubelet          Node multinode-671000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m51s               node-controller  Node multinode-671000 event: Registered Node multinode-671000 in Controller
	
	
	Name:               multinode-671000-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-671000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b
	                    minikube.k8s.io/name=multinode-671000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_14T08_25_50_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Oct 2024 15:25:49 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	                    node.kubernetes.io/unschedulable:NoSchedule
	Unschedulable:      true
	Lease:
	  HolderIdentity:  multinode-671000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Oct 2024 15:43:00 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 14 Oct 2024 15:42:08 +0000   Mon, 14 Oct 2024 15:43:45 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 14 Oct 2024 15:42:08 +0000   Mon, 14 Oct 2024 15:43:45 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 14 Oct 2024 15:42:08 +0000   Mon, 14 Oct 2024 15:43:45 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 14 Oct 2024 15:42:08 +0000   Mon, 14 Oct 2024 15:43:45 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  172.20.109.137
	  Hostname:    multinode-671000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 d57a3470ff3c4f03abf025a19c5c23d9
	  System UUID:                d36e677f-0d87-a94f-af28-d8324326f88f
	  Boot ID:                    33649cab-156e-4789-8adf-a6fc060b3de7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-bnqj6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kindnet-rgbjf              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      24m
	  kube-system                 kube-proxy-kbpjf           0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 24m                kube-proxy       
	  Normal  NodeHasSufficientMemory  24m (x2 over 24m)  kubelet          Node multinode-671000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24m (x2 over 24m)  kubelet          Node multinode-671000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24m (x2 over 24m)  kubelet          Node multinode-671000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           24m                node-controller  Node multinode-671000-m02 event: Registered Node multinode-671000-m02 in Controller
	  Normal  NodeReady                23m                kubelet          Node multinode-671000-m02 status is now: NodeReady
	  Normal  NodeNotReady             6m24s              node-controller  Node multinode-671000-m02 status is now: NodeNotReady
	  Normal  RegisteredNode           3m51s              node-controller  Node multinode-671000-m02 event: Registered Node multinode-671000-m02 in Controller
	
	
	Name:               multinode-671000-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-671000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b
	                    minikube.k8s.io/name=multinode-671000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_14T08_41_35_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Oct 2024 15:41:34 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-671000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Oct 2024 15:42:46 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 14 Oct 2024 15:41:53 +0000   Mon, 14 Oct 2024 15:43:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 14 Oct 2024 15:41:53 +0000   Mon, 14 Oct 2024 15:43:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 14 Oct 2024 15:41:53 +0000   Mon, 14 Oct 2024 15:43:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 14 Oct 2024 15:41:53 +0000   Mon, 14 Oct 2024 15:43:29 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  172.20.102.29
	  Hostname:    multinode-671000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 6da8cf5e96c04d55b9129d0893534bf2
	  System UUID:                49616488-815a-3f43-8f47-13dbf29b6ca7
	  Boot ID:                    d9fe58fb-ac8e-4430-9563-1b3e9fd35ffd
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-5rqxq       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      19m
	  kube-system                 kube-proxy-n6txs    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m31s                  kube-proxy       
	  Normal  Starting                 19m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  19m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  19m (x2 over 19m)      kubelet          Node multinode-671000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x2 over 19m)      kubelet          Node multinode-671000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x2 over 19m)      kubelet          Node multinode-671000-m03 status is now: NodeHasSufficientPID
	  Normal  NodeReady                19m                    kubelet          Node multinode-671000-m03 status is now: NodeReady
	  Normal  Starting                 8m35s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m35s (x2 over 8m35s)  kubelet          Node multinode-671000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m35s (x2 over 8m35s)  kubelet          Node multinode-671000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m35s (x2 over 8m35s)  kubelet          Node multinode-671000-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m35s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           8m30s                  node-controller  Node multinode-671000-m03 event: Registered Node multinode-671000-m03 in Controller
	  Normal  NodeReady                8m16s                  kubelet          Node multinode-671000-m03 status is now: NodeReady
	  Normal  NodeNotReady             6m40s                  node-controller  Node multinode-671000-m03 status is now: NodeNotReady
	  Normal  RegisteredNode           3m51s                  node-controller  Node multinode-671000-m03 event: Registered Node multinode-671000-m03 in Controller
	
	
	==> dmesg <==
	              * this clock source is slow. Consider trying other clock sources
	[  +5.764502] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.701221] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	[  +1.823727] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +7.351082] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Oct14 15:45] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.175163] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[ +26.061812] systemd-fstab-generator[1013]: Ignoring "noauto" option for root device
	[  +0.098944] kauditd_printk_skb: 71 callbacks suppressed
	[  +0.531295] systemd-fstab-generator[1053]: Ignoring "noauto" option for root device
	[Oct14 15:46] systemd-fstab-generator[1065]: Ignoring "noauto" option for root device
	[  +0.229472] systemd-fstab-generator[1079]: Ignoring "noauto" option for root device
	[  +2.943333] systemd-fstab-generator[1310]: Ignoring "noauto" option for root device
	[  +0.192845] systemd-fstab-generator[1321]: Ignoring "noauto" option for root device
	[  +0.209914] systemd-fstab-generator[1333]: Ignoring "noauto" option for root device
	[  +0.290916] systemd-fstab-generator[1348]: Ignoring "noauto" option for root device
	[  +0.928050] systemd-fstab-generator[1473]: Ignoring "noauto" option for root device
	[  +0.103044] kauditd_printk_skb: 202 callbacks suppressed
	[  +3.884891] systemd-fstab-generator[1614]: Ignoring "noauto" option for root device
	[  +1.232270] kauditd_printk_skb: 44 callbacks suppressed
	[  +5.880292] kauditd_printk_skb: 30 callbacks suppressed
	[  +4.216972] systemd-fstab-generator[2460]: Ignoring "noauto" option for root device
	[ +15.813728] kauditd_printk_skb: 72 callbacks suppressed
	
	
	==> etcd [48c8492e231e] <==
	{"level":"info","ts":"2024-10-14T15:46:12.049262Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"2dcbff584edb18cc","local-member-id":"782c48cbdf98397b","added-peer-id":"782c48cbdf98397b","added-peer-peer-urls":["https://172.20.100.167:2380"]}
	{"level":"info","ts":"2024-10-14T15:46:12.049694Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"2dcbff584edb18cc","local-member-id":"782c48cbdf98397b","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-14T15:46:12.049815Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-14T15:46:12.056204Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-14T15:46:12.062166Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-10-14T15:46:12.062574Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"782c48cbdf98397b","initial-advertise-peer-urls":["https://172.20.106.123:2380"],"listen-peer-urls":["https://172.20.106.123:2380"],"advertise-client-urls":["https://172.20.106.123:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.20.106.123:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-14T15:46:12.062654Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-14T15:46:12.062749Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"172.20.106.123:2380"}
	{"level":"info","ts":"2024-10-14T15:46:12.062764Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"172.20.106.123:2380"}
	{"level":"info","ts":"2024-10-14T15:46:13.489231Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"782c48cbdf98397b is starting a new election at term 2"}
	{"level":"info","ts":"2024-10-14T15:46:13.489328Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"782c48cbdf98397b became pre-candidate at term 2"}
	{"level":"info","ts":"2024-10-14T15:46:13.489358Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"782c48cbdf98397b received MsgPreVoteResp from 782c48cbdf98397b at term 2"}
	{"level":"info","ts":"2024-10-14T15:46:13.489374Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"782c48cbdf98397b became candidate at term 3"}
	{"level":"info","ts":"2024-10-14T15:46:13.489385Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"782c48cbdf98397b received MsgVoteResp from 782c48cbdf98397b at term 3"}
	{"level":"info","ts":"2024-10-14T15:46:13.489395Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"782c48cbdf98397b became leader at term 3"}
	{"level":"info","ts":"2024-10-14T15:46:13.489487Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 782c48cbdf98397b elected leader 782c48cbdf98397b at term 3"}
	{"level":"info","ts":"2024-10-14T15:46:13.496949Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-14T15:46:13.496902Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"782c48cbdf98397b","local-member-attributes":"{Name:multinode-671000 ClientURLs:[https://172.20.106.123:2379]}","request-path":"/0/members/782c48cbdf98397b/attributes","cluster-id":"2dcbff584edb18cc","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-14T15:46:13.497822Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-14T15:46:13.499586Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-14T15:46:13.499631Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-14T15:46:13.500815Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-14T15:46:13.502392Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.20.106.123:2379"}
	{"level":"info","ts":"2024-10-14T15:46:13.503879Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-14T15:46:13.505686Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 15:50:09 up 5 min,  0 users,  load average: 0.56, 0.35, 0.15
	Linux multinode-671000 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [bba035362eb9] <==
	I1014 15:49:28.923250       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 15:49:38.921624       1 main.go:296] Handling node with IPs: map[172.20.106.123:{}]
	I1014 15:49:38.921720       1 main.go:300] handling current node
	I1014 15:49:38.921743       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 15:49:38.921836       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 15:49:38.922481       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 15:49:38.922545       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 15:49:48.921579       1 main.go:296] Handling node with IPs: map[172.20.106.123:{}]
	I1014 15:49:48.921795       1 main.go:300] handling current node
	I1014 15:49:48.922065       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 15:49:48.922212       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 15:49:48.923125       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 15:49:48.923228       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 15:49:58.924765       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 15:49:58.924823       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 15:49:58.925555       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 15:49:58.925670       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 15:49:58.926098       1 main.go:296] Handling node with IPs: map[172.20.106.123:{}]
	I1014 15:49:58.926121       1 main.go:300] handling current node
	I1014 15:50:08.935195       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 15:50:08.935232       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 15:50:08.935460       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 15:50:08.935472       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 15:50:08.935566       1 main.go:296] Handling node with IPs: map[172.20.106.123:{}]
	I1014 15:50:08.935575       1 main.go:300] handling current node
	
	
	==> kindnet [fcdf89a3ac8c] <==
	I1014 15:43:04.863764       1 main.go:300] handling current node
	I1014 15:43:14.871242       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 15:43:14.871727       1 main.go:300] handling current node
	I1014 15:43:14.871818       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 15:43:14.871846       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 15:43:14.872085       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 15:43:14.872201       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 15:43:24.871405       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 15:43:24.871540       1 main.go:300] handling current node
	I1014 15:43:24.871566       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 15:43:24.871575       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 15:43:24.871835       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 15:43:24.872193       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 15:43:34.863042       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 15:43:34.863237       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	I1014 15:43:34.863962       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 15:43:34.864059       1 main.go:300] handling current node
	I1014 15:43:34.864077       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 15:43:34.864085       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 15:43:44.871016       1 main.go:296] Handling node with IPs: map[172.20.100.167:{}]
	I1014 15:43:44.871057       1 main.go:300] handling current node
	I1014 15:43:44.871074       1 main.go:296] Handling node with IPs: map[172.20.109.137:{}]
	I1014 15:43:44.871081       1 main.go:323] Node multinode-671000-m02 has CIDR [10.244.1.0/24] 
	I1014 15:43:44.871299       1 main.go:296] Handling node with IPs: map[172.20.102.29:{}]
	I1014 15:43:44.871310       1 main.go:323] Node multinode-671000-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [a834664fc8b8] <==
	I1014 15:46:15.221529       1 shared_informer.go:320] Caches are synced for configmaps
	I1014 15:46:15.226910       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1014 15:46:15.227013       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1014 15:46:15.229937       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1014 15:46:15.231898       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1014 15:46:15.233234       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1014 15:46:15.234375       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1014 15:46:15.235151       1 aggregator.go:171] initial CRD sync complete...
	I1014 15:46:15.235400       1 autoregister_controller.go:144] Starting autoregister controller
	I1014 15:46:15.235712       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1014 15:46:15.235936       1 cache.go:39] Caches are synced for autoregister controller
	I1014 15:46:15.255261       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1014 15:46:15.256039       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I1014 15:46:15.271561       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1014 15:46:15.319091       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1014 15:46:16.036564       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1014 15:46:16.558489       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.20.100.167 172.20.106.123]
	I1014 15:46:16.560272       1 controller.go:615] quota admission added evaluator for: endpoints
	I1014 15:46:16.573015       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1014 15:46:18.229365       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1014 15:46:18.748102       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1014 15:46:18.793266       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1014 15:46:18.985788       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1014 15:46:19.024530       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1014 15:46:36.563040       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.20.106.123]
	
	
	==> kube-controller-manager [712aad669c9f] <==
	I1014 15:41:29.185577       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-671000-m02"
	I1014 15:41:34.952323       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-671000-m03\" does not exist"
	I1014 15:41:34.952330       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-671000-m02"
	I1014 15:41:34.966125       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-671000-m03" podCIDRs=["10.244.3.0/24"]
	I1014 15:41:34.966148       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 15:41:34.966505       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 15:41:34.987165       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 15:41:35.003234       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 15:41:35.540526       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 15:41:39.448073       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 15:41:45.343875       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 15:41:53.719761       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 15:41:53.720945       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-671000-m02"
	I1014 15:41:53.741507       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 15:41:54.369330       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 15:42:08.557249       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 15:42:32.770970       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000"
	I1014 15:43:29.631595       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-671000-m02"
	I1014 15:43:29.632207       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 15:43:29.853526       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 15:43:35.163131       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 15:43:45.119758       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 15:43:45.151031       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 15:43:45.251625       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="18.269341ms"
	I1014 15:43:45.252472       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="121.1µs"
	
	
	==> kube-controller-manager [8af48c446f7e] <==
	I1014 15:46:18.721514       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="40.1µs"
	I1014 15:46:18.721809       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="136.173363ms"
	I1014 15:46:18.722033       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="29.5µs"
	I1014 15:46:18.722234       1 node_lifecycle_controller.go:1036] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1014 15:46:18.777385       1 shared_informer.go:320] Caches are synced for resource quota
	I1014 15:46:18.786812       1 shared_informer.go:320] Caches are synced for resource quota
	I1014 15:46:18.833914       1 shared_informer.go:320] Caches are synced for attach detach
	I1014 15:46:19.252391       1 shared_informer.go:320] Caches are synced for garbage collector
	I1014 15:46:19.267855       1 shared_informer.go:320] Caches are synced for garbage collector
	I1014 15:46:19.268119       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I1014 15:46:59.871635       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000"
	I1014 15:46:59.892163       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000"
	I1014 15:47:03.736416       1 node_lifecycle_controller.go:1055] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I1014 15:47:13.821153       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 15:47:20.979721       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="114.5µs"
	I1014 15:47:22.061324       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="10.05527ms"
	I1014 15:47:22.062652       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="53.8µs"
	I1014 15:47:22.098955       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="28.422114ms"
	I1014 15:47:22.099794       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="313.699µs"
	I1014 15:47:23.920002       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m03"
	I1014 15:49:37.469073       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 15:49:37.502688       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-671000-m02"
	I1014 15:49:37.553912       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="32.903377ms"
	I1014 15:49:37.574097       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="20.016686ms"
	I1014 15:49:37.574174       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="29.5µs"
	
	
	==> kube-proxy [e83db276dec3] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1014 15:46:18.020523       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1014 15:46:18.173230       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["172.20.106.123"]
	E1014 15:46:18.173392       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1014 15:46:18.286207       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1014 15:46:18.287289       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1014 15:46:18.287905       1 server_linux.go:169] "Using iptables Proxier"
	I1014 15:46:18.293792       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1014 15:46:18.300740       1 server.go:483] "Version info" version="v1.31.1"
	I1014 15:46:18.300778       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 15:46:18.305824       1 config.go:199] "Starting service config controller"
	I1014 15:46:18.308209       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1014 15:46:18.308868       1 config.go:328] "Starting node config controller"
	I1014 15:46:18.314183       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1014 15:46:18.309398       1 config.go:105] "Starting endpoint slice config controller"
	I1014 15:46:18.317842       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1014 15:46:18.419882       1 shared_informer.go:320] Caches are synced for node config
	I1014 15:46:18.419918       1 shared_informer.go:320] Caches are synced for service config
	I1014 15:46:18.435586       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [ea19428d7036] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1014 15:22:47.546236       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1014 15:22:47.606437       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["172.20.100.167"]
	E1014 15:22:47.606505       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1014 15:22:47.788755       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1014 15:22:47.788820       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1014 15:22:47.788852       1 server_linux.go:169] "Using iptables Proxier"
	I1014 15:22:47.807050       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1014 15:22:47.807424       1 server.go:483] "Version info" version="v1.31.1"
	I1014 15:22:47.807440       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 15:22:47.810361       1 config.go:199] "Starting service config controller"
	I1014 15:22:47.810416       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1014 15:22:47.810600       1 config.go:105] "Starting endpoint slice config controller"
	I1014 15:22:47.810629       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1014 15:22:47.811297       1 config.go:328] "Starting node config controller"
	I1014 15:22:47.811309       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1014 15:22:47.911443       1 shared_informer.go:320] Caches are synced for node config
	I1014 15:22:47.911791       1 shared_informer.go:320] Caches are synced for service config
	I1014 15:22:47.911866       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [661e75bbf6b4] <==
	W1014 15:22:37.270258       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1014 15:22:37.270286       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1014 15:22:37.290857       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1014 15:22:37.290997       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 15:22:37.304519       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1014 15:22:37.305020       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1014 15:22:37.328746       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1014 15:22:37.329049       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 15:22:37.338059       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1014 15:22:37.338269       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 15:22:37.394728       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1014 15:22:37.394803       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 15:22:37.445455       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1014 15:22:37.445612       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1014 15:22:37.490285       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1014 15:22:37.490817       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 15:22:37.537304       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1014 15:22:37.537396       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1014 15:22:37.713713       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1014 15:22:37.713777       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1014 15:22:39.593596       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1014 15:43:46.388691       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I1014 15:43:46.388783       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1014 15:43:46.389141       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1014 15:43:46.389549       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [d428685276e1] <==
	I1014 15:46:12.515594       1 serving.go:386] Generated self-signed cert in-memory
	W1014 15:46:15.152686       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1014 15:46:15.152818       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1014 15:46:15.152851       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1014 15:46:15.153007       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1014 15:46:15.250163       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I1014 15:46:15.250420       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 15:46:15.258344       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1014 15:46:15.258735       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1014 15:46:15.263966       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1014 15:46:15.258753       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1014 15:46:15.365145       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 14 15:46:58 multinode-671000 kubelet[1622]: E1014 15:46:58.908804    1622 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7c65d6cfc9-fs9ct" podUID="fd736862-9e3e-4a3d-9a86-08efd2338477"
	Oct 14 15:46:59 multinode-671000 kubelet[1622]: I1014 15:46:59.853068    1622 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
	Oct 14 15:47:02 multinode-671000 kubelet[1622]: I1014 15:47:02.908981    1622 scope.go:117] "RemoveContainer" containerID="c76c2585681071ddc096fbc042a644b58d0b7d5d604107c6906a076c7aa1cca6"
	Oct 14 15:47:09 multinode-671000 kubelet[1622]: I1014 15:47:09.901385    1622 scope.go:117] "RemoveContainer" containerID="0b5a6e440d7b67606ed0a4dfa4d07715b1fd7e6f53bc0b8779f86a33c5baf6e9"
	Oct 14 15:47:09 multinode-671000 kubelet[1622]: I1014 15:47:09.946936    1622 scope.go:117] "RemoveContainer" containerID="1ba3cd8bbd5963097f4d674fc98eca21e1a710f5a150a067747aa4e6c922d2fe"
	Oct 14 15:47:09 multinode-671000 kubelet[1622]: E1014 15:47:09.949713    1622 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 14 15:47:09 multinode-671000 kubelet[1622]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 14 15:47:09 multinode-671000 kubelet[1622]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 14 15:47:09 multinode-671000 kubelet[1622]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 14 15:47:09 multinode-671000 kubelet[1622]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 14 15:48:09 multinode-671000 kubelet[1622]: E1014 15:48:09.942793    1622 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 14 15:48:09 multinode-671000 kubelet[1622]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 14 15:48:09 multinode-671000 kubelet[1622]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 14 15:48:09 multinode-671000 kubelet[1622]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 14 15:48:09 multinode-671000 kubelet[1622]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 14 15:49:09 multinode-671000 kubelet[1622]: E1014 15:49:09.941933    1622 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 14 15:49:09 multinode-671000 kubelet[1622]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 14 15:49:09 multinode-671000 kubelet[1622]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 14 15:49:09 multinode-671000 kubelet[1622]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 14 15:49:09 multinode-671000 kubelet[1622]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 14 15:50:09 multinode-671000 kubelet[1622]: E1014 15:50:09.948553    1622 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 14 15:50:09 multinode-671000 kubelet[1622]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 14 15:50:09 multinode-671000 kubelet[1622]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 14 15:50:09 multinode-671000 kubelet[1622]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 14 15:50:09 multinode-671000 kubelet[1622]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
-- /stdout --
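The repeated kubelet "iptables canary" failures above come from the guest kernel shipping without an ip6tables "nat" table (note kube-proxy's matching "No iptables support for family IPv6" line). A minimal Go sketch of how one might probe for this from the host; the profile name is taken from this run, and the helper is illustrative, not part of the harness:

// probe_ip6tables.go - hypothetical diagnostic, not part of the minikube
// test harness: shell into the minikube VM and check whether the guest
// kernel exposes the ip6tables "nat" table the kubelet canary needs.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Assumption: a profile named "multinode-671000" exists and is running.
	cmd := exec.Command("minikube", "-p", "multinode-671000",
		"ssh", "--", "sudo", "ip6tables", "-t", "nat", "-L")
	out, err := cmd.CombinedOutput()
	if err != nil {
		// Matches the kubelet symptom: the nat table cannot be initialized.
		fmt.Printf("no ip6tables nat support: %v\n%s", err, out)
		return
	}
	fmt.Println("ip6tables nat table is available")
}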
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-671000 -n multinode-671000
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-671000 -n multinode-671000: (11.9684525s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-671000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-7dff88458-52zfl
helpers_test.go:274: ======> post-mortem[TestMultiNode/serial/RestartKeepsNodes]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context multinode-671000 describe pod busybox-7dff88458-52zfl
helpers_test.go:282: (dbg) kubectl --context multinode-671000 describe pod busybox-7dff88458-52zfl:
-- stdout --
	Name:             busybox-7dff88458-52zfl
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7dff88458
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7dff88458
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-frmkx (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-frmkx:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age   From               Message
	  ----     ------            ----  ----               -------
	  Warning  FailedScheduling  52s   default-scheduler  0/3 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable. preemption: 0/3 nodes are available: 1 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
-- /stdout --
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (479.39s)
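The describe output above shows why busybox-7dff88458-52zfl stays Pending: of the three nodes, one carries the node.kubernetes.io/unreachable taint, one is unschedulable, and one fails the pod anti-affinity rule. A small illustrative Go sketch (not part of helpers_test.go) that prints each node's taints and unschedulable flag for exactly this kind of triage, assuming the kubeconfig context from this run:

// taints.go - hypothetical triage helper: list node schedulability and
// taints, mirroring what the FailedScheduling event reports.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Assumption: the "multinode-671000" context created by this test run.
	jsonpath := `{range .items[*]}{.metadata.name}{"\t"}{.spec.unschedulable}{"\t"}{.spec.taints}{"\n"}{end}`
	out, err := exec.Command("kubectl", "--context", "multinode-671000",
		"get", "nodes", "-o", "jsonpath="+jsonpath).CombinedOutput()
	if err != nil {
		fmt.Printf("kubectl failed: %v\n%s", err, out)
		return
	}
	fmt.Print(string(out)) // name <tab> unschedulable <tab> taints
}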
TestKubernetesUpgrade (953.96s)
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-827800 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:222: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-827800 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperv: (5m44.5367651s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-827800
E1014 09:12:13.948662     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\addons-043000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
version_upgrade_test.go:227: (dbg) Done: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-827800: (35.2871689s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-827800 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p kubernetes-upgrade-827800 status --format={{.Host}}: exit status 7 (2.3421774s)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
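Exit status 7 is expected after a stop: "minikube status" encodes component state in its exit code (to the best of our knowledge a bitfield of host=1, kubelet=2, apiserver=4, so 7 means everything is stopped), which is why the test notes it "may be ok". A minimal Go sketch of that tolerance, under those assumptions:

// status_check.go - sketch of tolerating a stopped cluster, assuming the
// bitfield interpretation of minikube's status exit code described above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-windows-amd64.exe", "-p",
		"kubernetes-upgrade-827800", "status", "--format={{.Host}}")
	out, err := cmd.CombinedOutput()
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 7 {
		fmt.Printf("cluster stopped (expected after 'minikube stop'): %s", out)
		return
	}
	if err != nil {
		fmt.Printf("unexpected status error: %v\n%s", err, out)
		return
	}
	fmt.Printf("host state: %s", out)
}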
version_upgrade_test.go:243: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-827800 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=hyperv
E1014 09:14:10.854104     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\addons-043000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-827800 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=hyperv: exit status 90 (8m17.3628956s)
-- stdout --
	* [kubernetes-upgrade-827800] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5011 Build 19045.5011
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19790
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting "kubernetes-upgrade-827800" primary control-plane node in "kubernetes-upgrade-827800" cluster
	* Restarting existing hyperv VM for "kubernetes-upgrade-827800" ...
	
	
-- /stdout --
** stderr ** 
	I1014 09:12:37.103881    9416 out.go:345] Setting OutFile to fd 1552 ...
	I1014 09:12:37.104878    9416 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 09:12:37.104878    9416 out.go:358] Setting ErrFile to fd 1156...
	I1014 09:12:37.104878    9416 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 09:12:37.129883    9416 out.go:352] Setting JSON to false
	I1014 09:12:37.132878    9416 start.go:129] hostinfo: {"hostname":"minikube1","uptime":107871,"bootTime":1728814485,"procs":207,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5011 Build 19045.5011","kernelVersion":"10.0.19045.5011 Build 19045.5011","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W1014 09:12:37.133886    9416 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1014 09:12:37.183493    9416 out.go:177] * [kubernetes-upgrade-827800] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5011 Build 19045.5011
	I1014 09:12:37.284973    9416 notify.go:220] Checking for updates...
	I1014 09:12:37.346688    9416 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1014 09:12:37.399371    9416 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 09:12:37.403223    9416 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I1014 09:12:37.407662    9416 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 09:12:37.411588    9416 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 09:12:37.416217    9416 config.go:182] Loaded profile config "kubernetes-upgrade-827800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I1014 09:12:37.417447    9416 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 09:12:43.081103    9416 out.go:177] * Using the hyperv driver based on existing profile
	I1014 09:12:43.090327    9416 start.go:297] selected driver: hyperv
	I1014 09:12:43.090327    9416 start.go:901] validating driver "hyperv" against &{Name:kubernetes-upgrade-827800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-827800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.111.99 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 09:12:43.090327    9416 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 09:12:43.146570    9416 cni.go:84] Creating CNI manager for ""
	I1014 09:12:43.146646    9416 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1014 09:12:43.146884    9416 start.go:340] cluster config:
	{Name:kubernetes-upgrade-827800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-827800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.20.111.99 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 09:12:43.147153    9416 iso.go:125] acquiring lock: {Name:mkb71635bcbccba29dce9048ad4cc71430a7e577 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 09:12:43.246886    9416 out.go:177] * Starting "kubernetes-upgrade-827800" primary control-plane node in "kubernetes-upgrade-827800" cluster
	I1014 09:12:43.298197    9416 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1014 09:12:43.298597    9416 preload.go:146] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I1014 09:12:43.298658    9416 cache.go:56] Caching tarball of preloaded images
	I1014 09:12:43.299110    9416 preload.go:172] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1014 09:12:43.299279    9416 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1014 09:12:43.299541    9416 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\kubernetes-upgrade-827800\config.json ...
	I1014 09:12:43.302100    9416 start.go:360] acquireMachinesLock for kubernetes-upgrade-827800: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 09:18:00.331080    9416 start.go:364] duration metric: took 5m17.0278611s to acquireMachinesLock for "kubernetes-upgrade-827800"
	I1014 09:18:00.331164    9416 start.go:96] Skipping create...Using existing machine configuration
	I1014 09:18:00.331164    9416 fix.go:54] fixHost starting: 
	I1014 09:18:00.348025    9416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-827800 ).state
	I1014 09:18:02.462565    9416 main.go:141] libmachine: [stdout =====>] : Off
	
	I1014 09:18:02.462565    9416 main.go:141] libmachine: [stderr =====>] : 
	I1014 09:18:02.462701    9416 fix.go:112] recreateIfNeeded on kubernetes-upgrade-827800: state=Stopped err=<nil>
	W1014 09:18:02.462701    9416 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 09:18:02.466368    9416 out.go:177] * Restarting existing hyperv VM for "kubernetes-upgrade-827800" ...
	I1014 09:18:02.468999    9416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM kubernetes-upgrade-827800
	I1014 09:18:06.164415    9416 main.go:141] libmachine: [stdout =====>] : 
	I1014 09:18:06.164468    9416 main.go:141] libmachine: [stderr =====>] : 
	I1014 09:18:06.164468    9416 main.go:141] libmachine: Waiting for host to start...
	I1014 09:18:06.164575    9416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-827800 ).state
	I1014 09:18:08.572337    9416 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 09:18:08.572337    9416 main.go:141] libmachine: [stderr =====>] : 
	I1014 09:18:08.573077    9416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-827800 ).networkadapters[0]).ipaddresses[0]
	I1014 09:18:11.194640    9416 main.go:141] libmachine: [stdout =====>] : 
	I1014 09:18:11.194706    9416 main.go:141] libmachine: [stderr =====>] : 
	I1014 09:18:12.195767    9416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-827800 ).state
	I1014 09:18:14.328122    9416 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 09:18:14.328122    9416 main.go:141] libmachine: [stderr =====>] : 
	I1014 09:18:14.328122    9416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-827800 ).networkadapters[0]).ipaddresses[0]
	I1014 09:18:17.006015    9416 main.go:141] libmachine: [stdout =====>] : 
	I1014 09:18:17.007014    9416 main.go:141] libmachine: [stderr =====>] : 
	I1014 09:18:18.007726    9416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-827800 ).state
	I1014 09:18:20.423272    9416 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 09:18:20.423272    9416 main.go:141] libmachine: [stderr =====>] : 
	I1014 09:18:20.423724    9416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-827800 ).networkadapters[0]).ipaddresses[0]
	I1014 09:18:22.991161    9416 main.go:141] libmachine: [stdout =====>] : 
	I1014 09:18:22.991161    9416 main.go:141] libmachine: [stderr =====>] : 
	I1014 09:18:23.992359    9416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-827800 ).state
	I1014 09:18:26.280275    9416 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 09:18:26.280369    9416 main.go:141] libmachine: [stderr =====>] : 
	I1014 09:18:26.280467    9416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-827800 ).networkadapters[0]).ipaddresses[0]
	I1014 09:18:28.737740    9416 main.go:141] libmachine: [stdout =====>] : 
	I1014 09:18:28.737740    9416 main.go:141] libmachine: [stderr =====>] : 
	I1014 09:18:29.738562    9416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-827800 ).state
	I1014 09:18:31.976357    9416 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 09:18:31.976485    9416 main.go:141] libmachine: [stderr =====>] : 
	I1014 09:18:31.976485    9416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-827800 ).networkadapters[0]).ipaddresses[0]
	I1014 09:18:34.641593    9416 main.go:141] libmachine: [stdout =====>] : 172.20.98.164
	
	I1014 09:18:34.642037    9416 main.go:141] libmachine: [stderr =====>] : 
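Note: the repeated Get-VM state / ipaddresses[0] pairs above are the hyperv driver polling until DHCP hands the restarted VM an address. A Go sketch of that wait loop, using the exact PowerShell expressions from the log; timing, logging, and error handling are stripped down:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// ps runs one PowerShell expression the way the log shows libmachine doing it.
func ps(expr string) (string, error) {
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", expr).Output()
	return strings.TrimSpace(string(out)), err
}

// waitForIP polls VM state, then the first adapter's first IP, until one appears.
func waitForIP(vm string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		state, _ := ps(fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", vm))
		if state == "Running" {
			ip, _ := ps(fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vm))
			if ip != "" {
				return ip, nil // e.g. 172.20.98.164 above
			}
		}
		time.Sleep(time.Second)
	}
	return "", fmt.Errorf("timed out waiting for %s to get an IP", vm)
}

func main() {
	ip, err := waitForIP("kubernetes-upgrade-827800", 5*time.Minute)
	fmt.Println(ip, err)
}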
	I1014 09:18:34.645400    9416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-827800 ).state
	I1014 09:18:36.731383    9416 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 09:18:36.731383    9416 main.go:141] libmachine: [stderr =====>] : 
	I1014 09:18:36.732201    9416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-827800 ).networkadapters[0]).ipaddresses[0]
	I1014 09:18:39.242914    9416 main.go:141] libmachine: [stdout =====>] : 172.20.98.164
	
	I1014 09:18:39.242914    9416 main.go:141] libmachine: [stderr =====>] : 
	I1014 09:18:39.243333    9416 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\kubernetes-upgrade-827800\config.json ...
	I1014 09:18:39.246496    9416 machine.go:93] provisionDockerMachine start ...
	I1014 09:18:39.246575    9416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-827800 ).state
	I1014 09:18:41.380894    9416 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 09:18:41.380894    9416 main.go:141] libmachine: [stderr =====>] : 
	I1014 09:18:41.380894    9416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-827800 ).networkadapters[0]).ipaddresses[0]
	I1014 09:18:43.909713    9416 main.go:141] libmachine: [stdout =====>] : 172.20.98.164
	
	I1014 09:18:43.910438    9416 main.go:141] libmachine: [stderr =====>] : 
	I1014 09:18:43.915875    9416 main.go:141] libmachine: Using SSH client type: native
	I1014 09:18:43.918986    9416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.98.164 22 <nil> <nil>}
	I1014 09:18:43.918986    9416 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 09:18:44.042818    9416 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1014 09:18:44.042981    9416 buildroot.go:166] provisioning hostname "kubernetes-upgrade-827800"
	I1014 09:18:44.042981    9416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-827800 ).state
	I1014 09:18:46.194024    9416 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 09:18:46.194126    9416 main.go:141] libmachine: [stderr =====>] : 
	I1014 09:18:46.194347    9416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-827800 ).networkadapters[0]).ipaddresses[0]
	I1014 09:18:48.903786    9416 main.go:141] libmachine: [stdout =====>] : 172.20.98.164
	
	I1014 09:18:48.903786    9416 main.go:141] libmachine: [stderr =====>] : 
	I1014 09:18:48.912624    9416 main.go:141] libmachine: Using SSH client type: native
	I1014 09:18:48.913447    9416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.98.164 22 <nil> <nil>}
	I1014 09:18:48.913447    9416 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-827800 && echo "kubernetes-upgrade-827800" | sudo tee /etc/hostname
	I1014 09:18:49.069633    9416 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-827800
	
	I1014 09:18:49.069633    9416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-827800 ).state
	I1014 09:18:51.375147    9416 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 09:18:51.375147    9416 main.go:141] libmachine: [stderr =====>] : 
	I1014 09:18:51.375147    9416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-827800 ).networkadapters[0]).ipaddresses[0]
	I1014 09:18:54.166252    9416 main.go:141] libmachine: [stdout =====>] : 172.20.98.164
	
	I1014 09:18:54.166252    9416 main.go:141] libmachine: [stderr =====>] : 
	I1014 09:18:54.171890    9416 main.go:141] libmachine: Using SSH client type: native
	I1014 09:18:54.172678    9416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.98.164 22 <nil> <nil>}
	I1014 09:18:54.172810    9416 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-827800' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-827800/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-827800' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 09:18:54.327841    9416 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 09:18:54.327841    9416 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I1014 09:18:54.327965    9416 buildroot.go:174] setting up certificates
	I1014 09:18:54.328038    9416 provision.go:84] configureAuth start
	I1014 09:18:54.328038    9416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-827800 ).state
	I1014 09:18:56.594879    9416 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 09:18:56.595563    9416 main.go:141] libmachine: [stderr =====>] : 
	I1014 09:18:56.595738    9416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-827800 ).networkadapters[0]).ipaddresses[0]
	I1014 09:18:59.232772    9416 main.go:141] libmachine: [stdout =====>] : 172.20.98.164
	
	I1014 09:18:59.232976    9416 main.go:141] libmachine: [stderr =====>] : 
	I1014 09:18:59.232976    9416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-827800 ).state
	I1014 09:19:01.415994    9416 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 09:19:01.415994    9416 main.go:141] libmachine: [stderr =====>] : 
	I1014 09:19:01.416234    9416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-827800 ).networkadapters[0]).ipaddresses[0]
	I1014 09:19:03.998813    9416 main.go:141] libmachine: [stdout =====>] : 172.20.98.164
	
	I1014 09:19:03.998813    9416 main.go:141] libmachine: [stderr =====>] : 
	I1014 09:19:03.998928    9416 provision.go:143] copyHostCerts
	I1014 09:19:03.999185    9416 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I1014 09:19:03.999185    9416 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I1014 09:19:03.999766    9416 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I1014 09:19:04.001738    9416 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I1014 09:19:04.001812    9416 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I1014 09:19:04.002327    9416 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1014 09:19:04.004165    9416 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I1014 09:19:04.004165    9416 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I1014 09:19:04.004165    9416 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I1014 09:19:04.005981    9416 provision.go:117] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.kubernetes-upgrade-827800 san=[127.0.0.1 172.20.98.164 kubernetes-upgrade-827800 localhost minikube]
	I1014 09:19:04.102459    9416 provision.go:177] copyRemoteCerts
	I1014 09:19:04.112529    9416 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 09:19:04.112529    9416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-827800 ).state
	I1014 09:19:06.272156    9416 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 09:19:06.272156    9416 main.go:141] libmachine: [stderr =====>] : 
	I1014 09:19:06.272994    9416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-827800 ).networkadapters[0]).ipaddresses[0]
	I1014 09:19:08.925012    9416 main.go:141] libmachine: [stdout =====>] : 172.20.98.164
	
	I1014 09:19:08.925936    9416 main.go:141] libmachine: [stderr =====>] : 
	I1014 09:19:08.926193    9416 sshutil.go:53] new ssh client: &{IP:172.20.98.164 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\kubernetes-upgrade-827800\id_rsa Username:docker}
	I1014 09:19:09.031435    9416 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9188986s)
	I1014 09:19:09.032795    9416 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 09:19:09.084732    9416 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1014 09:19:09.136089    9416 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1241 bytes)
	I1014 09:19:09.185857    9416 provision.go:87] duration metric: took 14.8577956s to configureAuth
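Note: the "generating server cert" line above lists the SANs baked into server.pem (127.0.0.1, 172.20.98.164, kubernetes-upgrade-827800, localhost, minikube). A sketch of that issuance with Go's crypto/x509; the CA here is generated in place as a stand-in, whereas the real flow loads ca.pem/ca-key.pem from the certs directory:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Stand-in CA; the real flow loads the existing ca.pem / ca-key.pem instead.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the SANs and org shown in the provision.go line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-827800"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"kubernetes-upgrade-827800", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.20.98.164")},
	}
	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}) // body of server.pem
}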
	I1014 09:19:09.185857    9416 buildroot.go:189] setting minikube options for container-runtime
	I1014 09:19:09.186576    9416 config.go:182] Loaded profile config "kubernetes-upgrade-827800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 09:19:09.186576    9416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-827800 ).state
	I1014 09:19:11.326334    9416 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 09:19:11.326334    9416 main.go:141] libmachine: [stderr =====>] : 
	I1014 09:19:11.326945    9416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-827800 ).networkadapters[0]).ipaddresses[0]
	I1014 09:19:13.897269    9416 main.go:141] libmachine: [stdout =====>] : 172.20.98.164
	
	I1014 09:19:13.897842    9416 main.go:141] libmachine: [stderr =====>] : 
	I1014 09:19:13.907931    9416 main.go:141] libmachine: Using SSH client type: native
	I1014 09:19:13.908555    9416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.98.164 22 <nil> <nil>}
	I1014 09:19:13.908555    9416 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1014 09:19:14.041798    9416 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1014 09:19:14.041876    9416 buildroot.go:70] root file system type: tmpfs
	I1014 09:19:14.042098    9416 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1014 09:19:14.042240    9416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-827800 ).state
	I1014 09:19:16.222431    9416 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 09:19:16.222489    9416 main.go:141] libmachine: [stderr =====>] : 
	I1014 09:19:16.222489    9416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-827800 ).networkadapters[0]).ipaddresses[0]
	I1014 09:19:18.798509    9416 main.go:141] libmachine: [stdout =====>] : 172.20.98.164
	
	I1014 09:19:18.799078    9416 main.go:141] libmachine: [stderr =====>] : 
	I1014 09:19:18.804091    9416 main.go:141] libmachine: Using SSH client type: native
	I1014 09:19:18.804764    9416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.98.164 22 <nil> <nil>}
	I1014 09:19:18.804874    9416 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1014 09:19:18.965617    9416 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1014 09:19:18.965617    9416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-827800 ).state
	I1014 09:19:21.117327    9416 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 09:19:21.118001    9416 main.go:141] libmachine: [stderr =====>] : 
	I1014 09:19:21.118089    9416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-827800 ).networkadapters[0]).ipaddresses[0]
	I1014 09:19:23.705840    9416 main.go:141] libmachine: [stdout =====>] : 172.20.98.164
	
	I1014 09:19:23.706537    9416 main.go:141] libmachine: [stderr =====>] : 
	I1014 09:19:23.715720    9416 main.go:141] libmachine: Using SSH client type: native
	I1014 09:19:23.716964    9416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.98.164 22 <nil> <nil>}
	I1014 09:19:23.716964    9416 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1014 09:19:26.330800    9416 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1014 09:19:26.330873    9416 machine.go:96] duration metric: took 47.0843014s to provisionDockerMachine
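Note: the diff || { mv; daemon-reload; enable; restart; } one-liner above is an update-if-changed guard: identical files make diff exit 0 and the running service is left alone; a differing or missing file (as here, hence the "can't stat" message and the new symlink) takes the install-and-restart branch. A local Go sketch of the same guard; the real flow runs these steps over SSH inside the VM:

package main

import (
	"fmt"
	"os/exec"
)

func updateUnit(cur, next string) error {
	// diff exits 0 when the files are identical; anything else means "install".
	if err := exec.Command("diff", "-u", cur, next).Run(); err == nil {
		return nil // unchanged: leave the running service alone
	}
	steps := [][]string{
		{"mv", next, cur},
		{"systemctl", "daemon-reload"},
		{"systemctl", "enable", "docker"},
		{"systemctl", "restart", "docker"},
	}
	for _, s := range steps {
		if out, err := exec.Command("sudo", s...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %v\n%s", s, err, out)
		}
	}
	return nil
}

func main() {
	fmt.Println(updateUnit("/lib/systemd/system/docker.service",
		"/lib/systemd/system/docker.service.new"))
}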
	I1014 09:19:26.330873    9416 start.go:293] postStartSetup for "kubernetes-upgrade-827800" (driver="hyperv")
	I1014 09:19:26.330873    9416 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 09:19:26.344435    9416 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 09:19:26.344435    9416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-827800 ).state
	I1014 09:19:28.491088    9416 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 09:19:28.491189    9416 main.go:141] libmachine: [stderr =====>] : 
	I1014 09:19:28.491330    9416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-827800 ).networkadapters[0]).ipaddresses[0]
	I1014 09:19:30.985860    9416 main.go:141] libmachine: [stdout =====>] : 172.20.98.164
	
	I1014 09:19:30.985860    9416 main.go:141] libmachine: [stderr =====>] : 
	I1014 09:19:30.986404    9416 sshutil.go:53] new ssh client: &{IP:172.20.98.164 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\kubernetes-upgrade-827800\id_rsa Username:docker}
	I1014 09:19:31.097608    9416 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7531648s)
	I1014 09:19:31.111606    9416 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 09:19:31.118898    9416 info.go:137] Remote host: Buildroot 2023.02.9
	I1014 09:19:31.118898    9416 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I1014 09:19:31.119361    9416 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I1014 09:19:31.120315    9416 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem -> 9362.pem in /etc/ssl/certs
	I1014 09:19:31.131476    9416 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 09:19:31.151469    9416 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\9362.pem --> /etc/ssl/certs/9362.pem (1708 bytes)
	I1014 09:19:31.203116    9416 start.go:296] duration metric: took 4.8722355s for postStartSetup
	I1014 09:19:31.203116    9416 fix.go:56] duration metric: took 1m30.8718074s for fixHost
	I1014 09:19:31.203116    9416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-827800 ).state
	I1014 09:19:33.277216    9416 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 09:19:33.277216    9416 main.go:141] libmachine: [stderr =====>] : 
	I1014 09:19:33.278026    9416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-827800 ).networkadapters[0]).ipaddresses[0]
	I1014 09:19:35.811980    9416 main.go:141] libmachine: [stdout =====>] : 172.20.98.164
	
	I1014 09:19:35.812913    9416 main.go:141] libmachine: [stderr =====>] : 
	I1014 09:19:35.817754    9416 main.go:141] libmachine: Using SSH client type: native
	I1014 09:19:35.818401    9416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.98.164 22 <nil> <nil>}
	I1014 09:19:35.818401    9416 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1014 09:19:35.948030    9416 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728922775.947306199
	
	I1014 09:19:35.948030    9416 fix.go:216] guest clock: 1728922775.947306199
	I1014 09:19:35.948030    9416 fix.go:229] Guest: 2024-10-14 09:19:35.947306199 -0700 PDT Remote: 2024-10-14 09:19:31.2031168 -0700 PDT m=+414.207904901 (delta=4.744189399s)
	I1014 09:19:35.948030    9416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-827800 ).state
	I1014 09:19:38.147036    9416 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 09:19:38.147116    9416 main.go:141] libmachine: [stderr =====>] : 
	I1014 09:19:38.147230    9416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-827800 ).networkadapters[0]).ipaddresses[0]
	I1014 09:19:40.730159    9416 main.go:141] libmachine: [stdout =====>] : 172.20.98.164
	
	I1014 09:19:40.730159    9416 main.go:141] libmachine: [stderr =====>] : 
	I1014 09:19:40.736919    9416 main.go:141] libmachine: Using SSH client type: native
	I1014 09:19:40.736919    9416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd823e0] 0xd84f20 <nil>  [] 0s} 172.20.98.164 22 <nil> <nil>}
	I1014 09:19:40.736919    9416 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1728922775
	I1014 09:19:40.879410    9416 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Oct 14 16:19:35 UTC 2024
	
	I1014 09:19:40.879480    9416 fix.go:236] clock set: Mon Oct 14 16:19:35 UTC 2024
	 (err=<nil>)
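Note: the Guest/Remote/delta line above is minikube measuring guest clock skew by parsing `date +%s.%N` over SSH and comparing it with the host clock, then writing a fix with `sudo date -s @<epoch>`. A Go sketch of just the measurement, using the exact values from the log (which epoch minikube chooses to write back is not derived here; the command is shown as run):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// guestSkew parses the guest's `date +%s.%N` output and returns guest - host.
func guestSkew(guestOut string, host time.Time) time.Duration {
	f, _ := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
	guest := time.Unix(0, int64(f*float64(time.Second)))
	return guest.Sub(host)
}

func main() {
	// Remote (host) timestamp from the fix.go line above, in PDT (UTC-7).
	host := time.Date(2024, 10, 14, 9, 19, 31, 203116800, time.FixedZone("PDT", -7*3600))
	d := guestSkew("1728922775.947306199", host)
	fmt.Printf("guest is ahead by %s\n", d.Round(time.Millisecond)) // ~4.744s, matching delta above
	fmt.Println("fix applied in the log: sudo date -s @1728922775")
}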
	I1014 09:19:40.879480    9416 start.go:83] releasing machines lock for "kubernetes-upgrade-827800", held for 1m40.5481552s
	I1014 09:19:40.879798    9416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-827800 ).state
	I1014 09:19:43.117916    9416 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 09:19:43.117970    9416 main.go:141] libmachine: [stderr =====>] : 
	I1014 09:19:43.118065    9416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-827800 ).networkadapters[0]).ipaddresses[0]
	I1014 09:19:45.720507    9416 main.go:141] libmachine: [stdout =====>] : 172.20.98.164
	
	I1014 09:19:45.720507    9416 main.go:141] libmachine: [stderr =====>] : 
	I1014 09:19:45.724838    9416 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1014 09:19:45.724838    9416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-827800 ).state
	I1014 09:19:45.735379    9416 ssh_runner.go:195] Run: cat /version.json
	I1014 09:19:45.735929    9416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-827800 ).state
	I1014 09:19:48.016810    9416 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 09:19:48.017694    9416 main.go:141] libmachine: [stderr =====>] : 
	I1014 09:19:48.017694    9416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-827800 ).networkadapters[0]).ipaddresses[0]
	I1014 09:19:48.057679    9416 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 09:19:48.057679    9416 main.go:141] libmachine: [stderr =====>] : 
	I1014 09:19:48.057679    9416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-827800 ).networkadapters[0]).ipaddresses[0]
	I1014 09:19:50.808553    9416 main.go:141] libmachine: [stdout =====>] : 172.20.98.164
	
	I1014 09:19:50.808637    9416 main.go:141] libmachine: [stderr =====>] : 
	I1014 09:19:50.808907    9416 sshutil.go:53] new ssh client: &{IP:172.20.98.164 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\kubernetes-upgrade-827800\id_rsa Username:docker}
	I1014 09:19:50.841306    9416 main.go:141] libmachine: [stdout =====>] : 172.20.98.164
	
	I1014 09:19:50.841306    9416 main.go:141] libmachine: [stderr =====>] : 
	I1014 09:19:50.841372    9416 sshutil.go:53] new ssh client: &{IP:172.20.98.164 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\kubernetes-upgrade-827800\id_rsa Username:docker}
	I1014 09:19:50.907541    9416 ssh_runner.go:235] Completed: cat /version.json: (5.1721535s)
	I1014 09:19:50.923595    9416 ssh_runner.go:195] Run: systemctl --version
	I1014 09:19:50.930555    9416 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.2057089s)
	W1014 09:19:50.930555    9416 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
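Note: exit status 127 above is bash's "command not found", not a network failure. The connectivity probe was sent through ssh_runner into the Linux guest but used the Windows binary name curl.exe, which does not exist there, so this check can never succeed on this path, and the two proxy warnings a few lines below follow from it regardless of actual connectivity. A sketch of the presumable fix, choosing the binary name by where the command runs (function and parameter names here are hypothetical, not minikube's):

package main

import "fmt"

// curlBinary picks the curl binary name for the OS that will actually
// execute the command, not the OS of the machine building the command.
func curlBinary(runnerGOOS string) string {
	if runnerGOOS == "windows" {
		return "curl.exe"
	}
	return "curl"
}

func main() {
	fmt.Println(curlBinary("windows")) // curl.exe: host-side check on Windows
	fmt.Println(curlBinary("linux"))   // curl: what should run inside the Linux VM
}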
	I1014 09:19:50.947542    9416 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 09:19:50.957537    9416 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 09:19:50.967583    9416 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1014 09:19:51.002151    9416 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	W1014 09:19:51.029968    9416 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W1014 09:19:51.029968    9416 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1014 09:19:51.041255    9416 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1014 09:19:51.042215    9416 start.go:495] detecting cgroup driver to use...
	I1014 09:19:51.042215    9416 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 09:19:51.097704    9416 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1014 09:19:51.130704    9416 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1014 09:19:51.154693    9416 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1014 09:19:51.165698    9416 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1014 09:19:51.203355    9416 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1014 09:19:51.243345    9416 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1014 09:19:51.287988    9416 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1014 09:19:51.320649    9416 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 09:19:51.367903    9416 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1014 09:19:51.417993    9416 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1014 09:19:51.456064    9416 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1014 09:19:51.506831    9416 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 09:19:51.529081    9416 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1014 09:19:51.540985    9416 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1014 09:19:51.577973    9416 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 09:19:51.610563    9416 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 09:19:51.851193    9416 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1014 09:19:51.889731    9416 start.go:495] detecting cgroup driver to use...
	I1014 09:19:51.906088    9416 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1014 09:19:51.951715    9416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 09:19:51.991510    9416 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 09:19:52.037495    9416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 09:19:52.078173    9416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1014 09:19:52.120434    9416 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1014 09:19:52.203858    9416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1014 09:19:52.236883    9416 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 09:19:52.294254    9416 ssh_runner.go:195] Run: which cri-dockerd
	I1014 09:19:52.311270    9416 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1014 09:19:52.331267    9416 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I1014 09:19:52.376255    9416 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1014 09:19:52.607453    9416 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1014 09:19:52.814473    9416 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1014 09:19:52.814473    9416 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1014 09:19:52.872606    9416 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 09:19:53.097214    9416 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1014 09:20:54.222851    9416 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1244238s)
	I1014 09:20:54.234077    9416 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I1014 09:20:54.269545    9416 out.go:201] 
	W1014 09:20:54.272136    9416 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Oct 14 16:19:24 kubernetes-upgrade-827800 systemd[1]: Starting Docker Application Container Engine...
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[653]: time="2024-10-14T16:19:24.337250550Z" level=info msg="Starting up"
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[653]: time="2024-10-14T16:19:24.339528053Z" level=info msg="containerd not running, starting managed containerd"
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[653]: time="2024-10-14T16:19:24.340963555Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=660
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.376251898Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.403600233Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.403786233Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.403960633Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.403985533Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.404534934Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.404644334Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.404887634Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.404978234Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.405017134Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.405028834Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.405685935Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.406445336Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.409438440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.409643940Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.409863540Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.409955740Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.410433741Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.410603341Z" level=info msg="metadata content store policy set" policy=shared
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.414323146Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.414379646Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.414401746Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.414437146Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.414454346Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.414565246Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.415295347Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.415494347Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.415622447Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.415645048Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.415662348Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.415677948Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.415692648Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.415707848Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.415723848Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.415806148Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.415829848Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.415842248Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.415876748Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.415894548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.415907948Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.415921748Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.415974548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.416014548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.416029548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.416047448Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.416078848Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.416096048Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.416110348Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.416187948Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.416202148Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.416220048Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.416268148Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.416391948Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.416417648Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.416562649Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.416645849Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.416685049Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.416707249Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.416720849Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.416854949Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.416884449Z" level=info msg="NRI interface is disabled by configuration."
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.417188249Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.417539250Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.417669550Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.417774350Z" level=info msg="containerd successfully booted in 0.044257s"
	Oct 14 16:19:25 kubernetes-upgrade-827800 dockerd[653]: time="2024-10-14T16:19:25.408207660Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Oct 14 16:19:25 kubernetes-upgrade-827800 dockerd[653]: time="2024-10-14T16:19:25.598345082Z" level=info msg="Loading containers: start."
	Oct 14 16:19:25 kubernetes-upgrade-827800 dockerd[653]: time="2024-10-14T16:19:25.973430820Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Oct 14 16:19:26 kubernetes-upgrade-827800 dockerd[653]: time="2024-10-14T16:19:26.139876310Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Oct 14 16:19:26 kubernetes-upgrade-827800 dockerd[653]: time="2024-10-14T16:19:26.248556035Z" level=info msg="Loading containers: done."
	Oct 14 16:19:26 kubernetes-upgrade-827800 dockerd[653]: time="2024-10-14T16:19:26.274811965Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Oct 14 16:19:26 kubernetes-upgrade-827800 dockerd[653]: time="2024-10-14T16:19:26.274929765Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Oct 14 16:19:26 kubernetes-upgrade-827800 dockerd[653]: time="2024-10-14T16:19:26.275039366Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
	Oct 14 16:19:26 kubernetes-upgrade-827800 dockerd[653]: time="2024-10-14T16:19:26.276323667Z" level=info msg="Daemon has completed initialization"
	Oct 14 16:19:26 kubernetes-upgrade-827800 dockerd[653]: time="2024-10-14T16:19:26.327835826Z" level=info msg="API listen on /var/run/docker.sock"
	Oct 14 16:19:26 kubernetes-upgrade-827800 dockerd[653]: time="2024-10-14T16:19:26.328057826Z" level=info msg="API listen on [::]:2376"
	Oct 14 16:19:26 kubernetes-upgrade-827800 systemd[1]: Started Docker Application Container Engine.
	Oct 14 16:19:53 kubernetes-upgrade-827800 systemd[1]: Stopping Docker Application Container Engine...
	Oct 14 16:19:53 kubernetes-upgrade-827800 dockerd[653]: time="2024-10-14T16:19:53.122787969Z" level=info msg="Processing signal 'terminated'"
	Oct 14 16:19:53 kubernetes-upgrade-827800 dockerd[653]: time="2024-10-14T16:19:53.125164269Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Oct 14 16:19:53 kubernetes-upgrade-827800 dockerd[653]: time="2024-10-14T16:19:53.126385169Z" level=info msg="Daemon shutdown complete"
	Oct 14 16:19:53 kubernetes-upgrade-827800 dockerd[653]: time="2024-10-14T16:19:53.126556969Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Oct 14 16:19:53 kubernetes-upgrade-827800 dockerd[653]: time="2024-10-14T16:19:53.126567669Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Oct 14 16:19:54 kubernetes-upgrade-827800 systemd[1]: docker.service: Deactivated successfully.
	Oct 14 16:19:54 kubernetes-upgrade-827800 systemd[1]: Stopped Docker Application Container Engine.
	Oct 14 16:19:54 kubernetes-upgrade-827800 systemd[1]: Starting Docker Application Container Engine...
	Oct 14 16:19:54 kubernetes-upgrade-827800 dockerd[1169]: time="2024-10-14T16:19:54.193916126Z" level=info msg="Starting up"
	Oct 14 16:20:54 kubernetes-upgrade-827800 dockerd[1169]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Oct 14 16:20:54 kubernetes-upgrade-827800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Oct 14 16:20:54 kubernetes-upgrade-827800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Oct 14 16:20:54 kubernetes-upgrade-827800 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Oct 14 16:19:24 kubernetes-upgrade-827800 systemd[1]: Starting Docker Application Container Engine...
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[653]: time="2024-10-14T16:19:24.337250550Z" level=info msg="Starting up"
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[653]: time="2024-10-14T16:19:24.339528053Z" level=info msg="containerd not running, starting managed containerd"
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[653]: time="2024-10-14T16:19:24.340963555Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=660
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.376251898Z" level=info msg="starting containerd" revision=7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c version=v1.7.22
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.403600233Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.403786233Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.403960633Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.403985533Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.404534934Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.404644334Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.404887634Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.404978234Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.405017134Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.405028834Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.405685935Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.406445336Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.409438440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.409643940Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.409863540Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.409955740Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.410433741Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.410603341Z" level=info msg="metadata content store policy set" policy=shared
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.414323146Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.414379646Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.414401746Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.414437146Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.414454346Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.414565246Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.415295347Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.415494347Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.415622447Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.415645048Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.415662348Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.415677948Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.415692648Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.415707848Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.415723848Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.415806148Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.415829848Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.415842248Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.415876748Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.415894548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.415907948Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.415921748Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.415974548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.416014548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.416029548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.416047448Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.416078848Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.416096048Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.416110348Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.416187948Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.416202148Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.416220048Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.416268148Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.416391948Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.416417648Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.416562649Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.416645849Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.416685049Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.416707249Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.416720849Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.416854949Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.416884449Z" level=info msg="NRI interface is disabled by configuration."
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.417188249Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.417539250Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.417669550Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Oct 14 16:19:24 kubernetes-upgrade-827800 dockerd[660]: time="2024-10-14T16:19:24.417774350Z" level=info msg="containerd successfully booted in 0.044257s"
	Oct 14 16:19:25 kubernetes-upgrade-827800 dockerd[653]: time="2024-10-14T16:19:25.408207660Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Oct 14 16:19:25 kubernetes-upgrade-827800 dockerd[653]: time="2024-10-14T16:19:25.598345082Z" level=info msg="Loading containers: start."
	Oct 14 16:19:25 kubernetes-upgrade-827800 dockerd[653]: time="2024-10-14T16:19:25.973430820Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Oct 14 16:19:26 kubernetes-upgrade-827800 dockerd[653]: time="2024-10-14T16:19:26.139876310Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Oct 14 16:19:26 kubernetes-upgrade-827800 dockerd[653]: time="2024-10-14T16:19:26.248556035Z" level=info msg="Loading containers: done."
	Oct 14 16:19:26 kubernetes-upgrade-827800 dockerd[653]: time="2024-10-14T16:19:26.274811965Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Oct 14 16:19:26 kubernetes-upgrade-827800 dockerd[653]: time="2024-10-14T16:19:26.274929765Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Oct 14 16:19:26 kubernetes-upgrade-827800 dockerd[653]: time="2024-10-14T16:19:26.275039366Z" level=info msg="Docker daemon" commit=41ca978 containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
	Oct 14 16:19:26 kubernetes-upgrade-827800 dockerd[653]: time="2024-10-14T16:19:26.276323667Z" level=info msg="Daemon has completed initialization"
	Oct 14 16:19:26 kubernetes-upgrade-827800 dockerd[653]: time="2024-10-14T16:19:26.327835826Z" level=info msg="API listen on /var/run/docker.sock"
	Oct 14 16:19:26 kubernetes-upgrade-827800 dockerd[653]: time="2024-10-14T16:19:26.328057826Z" level=info msg="API listen on [::]:2376"
	Oct 14 16:19:26 kubernetes-upgrade-827800 systemd[1]: Started Docker Application Container Engine.
	Oct 14 16:19:53 kubernetes-upgrade-827800 systemd[1]: Stopping Docker Application Container Engine...
	Oct 14 16:19:53 kubernetes-upgrade-827800 dockerd[653]: time="2024-10-14T16:19:53.122787969Z" level=info msg="Processing signal 'terminated'"
	Oct 14 16:19:53 kubernetes-upgrade-827800 dockerd[653]: time="2024-10-14T16:19:53.125164269Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Oct 14 16:19:53 kubernetes-upgrade-827800 dockerd[653]: time="2024-10-14T16:19:53.126385169Z" level=info msg="Daemon shutdown complete"
	Oct 14 16:19:53 kubernetes-upgrade-827800 dockerd[653]: time="2024-10-14T16:19:53.126556969Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Oct 14 16:19:53 kubernetes-upgrade-827800 dockerd[653]: time="2024-10-14T16:19:53.126567669Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Oct 14 16:19:54 kubernetes-upgrade-827800 systemd[1]: docker.service: Deactivated successfully.
	Oct 14 16:19:54 kubernetes-upgrade-827800 systemd[1]: Stopped Docker Application Container Engine.
	Oct 14 16:19:54 kubernetes-upgrade-827800 systemd[1]: Starting Docker Application Container Engine...
	Oct 14 16:19:54 kubernetes-upgrade-827800 dockerd[1169]: time="2024-10-14T16:19:54.193916126Z" level=info msg="Starting up"
	Oct 14 16:20:54 kubernetes-upgrade-827800 dockerd[1169]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Oct 14 16:20:54 kubernetes-upgrade-827800 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Oct 14 16:20:54 kubernetes-upgrade-827800 systemd[1]: docker.service: Failed with result 'exit-code'.
	Oct 14 16:20:54 kubernetes-upgrade-827800 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W1014 09:20:54.273180    9416 out.go:270] * 
	* 
	W1014 09:20:54.275201    9416 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 09:20:54.278667    9416 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-827800 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=hyperv : exit status 90
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-827800 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-827800 version --output=json: exit status 1 (144.987ms)

                                                
                                                
** stderr ** 
	error: context "kubernetes-upgrade-827800" does not exist

                                                
                                                
** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:629: *** TestKubernetesUpgrade FAILED at 2024-10-14 09:20:54.6470163 -0700 PDT m=+9580.663096201
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p kubernetes-upgrade-827800 -n kubernetes-upgrade-827800
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p kubernetes-upgrade-827800 -n kubernetes-upgrade-827800: exit status 6 (12.4184478s)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1014 09:21:06.990604   11848 status.go:458] kubeconfig endpoint: get endpoint: "kubernetes-upgrade-827800" does not appear in C:\Users\jenkins.minikube1\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-827800" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-827800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-827800
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-827800: (1m1.7000692s)
--- FAIL: TestKubernetesUpgrade (953.96s)
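Note: the journal above shows the proximate cause — after the restart at 16:19:54, dockerd timed out dialing /run/containerd/containerd.sock ("context deadline exceeded"), so docker.service never came back up. The following is a hypothetical diagnostic sequence, not part of this test run; the profile name is copied from the log above, and it assumes the VM is still reachable over SSH and that the guest ships a standalone containerd unit:

	# Check whether containerd came up inside the VM (assumption: a containerd systemd unit exists on the guest)
	out/minikube-windows-amd64.exe ssh -p kubernetes-upgrade-827800 -- sudo systemctl status containerd
	out/minikube-windows-amd64.exe ssh -p kubernetes-upgrade-827800 -- sudo journalctl --no-pager -u containerd
	# Repair the stale kubectl context reported by the status check above
	out/minikube-windows-amd64.exe update-context -p kubernetes-upgrade-827800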

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (303s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-204300 --driver=hyperv
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-204300 --driver=hyperv: exit status 1 (4m59.7598366s)

                                                
                                                
-- stdout --
	* [NoKubernetes-204300] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5011 Build 19045.5011
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19790
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting "NoKubernetes-204300" primary control-plane node in "NoKubernetes-204300" cluster
	* Creating hyperv VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...

                                                
                                                
-- /stdout --
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p NoKubernetes-204300 --driver=hyperv" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-204300 -n NoKubernetes-204300
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-204300 -n NoKubernetes-204300: exit status 7 (3.2399929s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E1014 09:11:18.086850    1244 main.go:137] libmachine: [stderr =====>] : Hyper-V\Get-VM : Hyper-V was unable to find a virtual machine with name "NoKubernetes-204300".
	At line:1 char:3
	+ ( Hyper-V\Get-VM NoKubernetes-204300 ).state
	+   ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
	    + CategoryInfo          : InvalidArgument: (NoKubernetes-204300:String) [Get-VM], VirtualizationException
	    + FullyQualifiedErrorId : InvalidParameter,Microsoft.HyperV.PowerShell.Commands.GetVM
	 
	

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-204300" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (303.00s)
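Note: the Hyper-V\Get-VM stderr above indicates the VM was never created before the start timed out, so the status probe reports "Nonexistent". A hypothetical manual check from an elevated PowerShell session (VM name copied from the log above; not part of this test run):

	# List all Hyper-V VMs and their states
	Hyper-V\Get-VM | Select-Object Name, State
	# Query a single VM without tripping the "unable to find a virtual machine" error seen above
	$vm = Hyper-V\Get-VM -Name "NoKubernetes-204300" -ErrorAction SilentlyContinue
	if ($vm) { $vm.State } else { "VM does not exist" }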

                                                
                                    

Test pass (154/203)

Order | Passed test | Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 20.2
4 TestDownloadOnly/v1.20.0/preload-exists 0
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.31
9 TestDownloadOnly/v1.20.0/DeleteAll 0.64
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.81
12 TestDownloadOnly/v1.31.1/json-events 10.89
13 TestDownloadOnly/v1.31.1/preload-exists 0
16 TestDownloadOnly/v1.31.1/kubectl 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.27
18 TestDownloadOnly/v1.31.1/DeleteAll 0.86
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.66
21 TestBinaryMirror 7.05
22 TestOffline 268.42
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.21
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.19
27 TestAddons/Setup 433.42
29 TestAddons/serial/Volcano 65.55
31 TestAddons/serial/GCPAuth/Namespaces 0.35
32 TestAddons/serial/GCPAuth/PullSecret 9.6
34 TestAddons/parallel/Registry 34.25
35 TestAddons/parallel/Ingress 59.44
36 TestAddons/parallel/InspektorGadget 27.17
37 TestAddons/parallel/MetricsServer 22.81
39 TestAddons/parallel/CSI 90.67
40 TestAddons/parallel/Headlamp 42.17
41 TestAddons/parallel/CloudSpanner 20.47
42 TestAddons/parallel/LocalPath 35.27
43 TestAddons/parallel/NvidiaDevicePlugin 22.21
44 TestAddons/parallel/Yakd 27.1
46 TestAddons/StoppedEnableDisable 52.3
47 TestCertOptions 554.8
48 TestCertExpiration 890.94
49 TestDockerFlags 469.15
50 TestForceSystemdFlag 505.98
51 TestForceSystemdEnv 406.63
58 TestErrorSpam/start 16.92
59 TestErrorSpam/status 35.83
60 TestErrorSpam/pause 22.32
61 TestErrorSpam/unpause 22.4
62 TestErrorSpam/stop 55.56
65 TestFunctional/serial/CopySyncFile 0.04
66 TestFunctional/serial/StartWithProxy 194.87
67 TestFunctional/serial/AuditLog 0
68 TestFunctional/serial/SoftStart 125.66
69 TestFunctional/serial/KubeContext 0.14
70 TestFunctional/serial/KubectlGetPods 0.23
73 TestFunctional/serial/CacheCmd/cache/add_remote 26.3
74 TestFunctional/serial/CacheCmd/cache/add_local 9.88
75 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.2
76 TestFunctional/serial/CacheCmd/cache/list 0.2
77 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 9.19
78 TestFunctional/serial/CacheCmd/cache/cache_reload 35.41
79 TestFunctional/serial/CacheCmd/cache/delete 0.4
80 TestFunctional/serial/MinikubeKubectlCmd 0.43
82 TestFunctional/serial/ExtraConfig 126.3
83 TestFunctional/serial/ComponentHealth 0.19
84 TestFunctional/serial/LogsCmd 8.47
85 TestFunctional/serial/LogsFileCmd 10.54
86 TestFunctional/serial/InvalidService 20.66
88 TestFunctional/parallel/ConfigCmd 1.4
92 TestFunctional/parallel/StatusCmd 41.75
96 TestFunctional/parallel/ServiceCmdConnect 26.46
97 TestFunctional/parallel/AddonsCmd 0.52
98 TestFunctional/parallel/PersistentVolumeClaim 37.78
100 TestFunctional/parallel/SSHCmd 20.66
101 TestFunctional/parallel/CpCmd 60.47
102 TestFunctional/parallel/MySQL 67.57
103 TestFunctional/parallel/FileSync 11.09
104 TestFunctional/parallel/CertSync 62.31
108 TestFunctional/parallel/NodeLabels 0.25
110 TestFunctional/parallel/NonActiveRuntimeDisabled 10.88
112 TestFunctional/parallel/License 3.33
113 TestFunctional/parallel/ImageCommands/ImageListShort 7.38
114 TestFunctional/parallel/ImageCommands/ImageListTable 7.36
115 TestFunctional/parallel/ImageCommands/ImageListJson 7.6
116 TestFunctional/parallel/ImageCommands/ImageListYaml 7.59
117 TestFunctional/parallel/ImageCommands/ImageBuild 28.08
118 TestFunctional/parallel/ImageCommands/Setup 2.12
119 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 17.32
120 TestFunctional/parallel/Version/short 0.44
121 TestFunctional/parallel/Version/components 8.05
122 TestFunctional/parallel/ServiceCmd/DeployApp 16.49
123 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 17.92
124 TestFunctional/parallel/ServiceCmd/List 14.19
126 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 9.69
127 TestFunctional/parallel/ServiceCmd/JSONOutput 15.03
128 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 19.11
129 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
131 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 16.64
133 TestFunctional/parallel/ImageCommands/ImageSaveToFile 8.07
139 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.21
140 TestFunctional/parallel/ImageCommands/ImageRemove 15.59
142 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 15.88
144 TestFunctional/parallel/ProfileCmd/profile_not_create 14.25
145 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 8.99
146 TestFunctional/parallel/ProfileCmd/profile_list 14.18
147 TestFunctional/parallel/ProfileCmd/profile_json_output 14.19
148 TestFunctional/parallel/DockerEnv/powershell 44.62
149 TestFunctional/parallel/UpdateContextCmd/no_changes 2.43
150 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 2.46
151 TestFunctional/parallel/UpdateContextCmd/no_clusters 2.54
152 TestFunctional/delete_echo-server_images 0.24
153 TestFunctional/delete_my-image_image 0.1
154 TestFunctional/delete_minikube_cached_images 0.09
162 TestMultiControlPlane/serial/NodeLabels 0.18
170 TestImageBuild/serial/Setup 193.12
171 TestImageBuild/serial/NormalBuild 10.25
172 TestImageBuild/serial/BuildWithBuildArg 8.63
173 TestImageBuild/serial/BuildWithDockerIgnore 8.02
174 TestImageBuild/serial/BuildWithSpecifiedDockerfile 8.08
178 TestJSONOutput/start/Command 198.28
179 TestJSONOutput/start/Audit 0
181 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
182 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
184 TestJSONOutput/pause/Command 7.73
185 TestJSONOutput/pause/Audit 0
187 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/unpause/Command 7.73
191 TestJSONOutput/unpause/Audit 0
193 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/stop/Command 39.21
197 TestJSONOutput/stop/Audit 0
199 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
201 TestErrorJSONOutput 0.84
206 TestMainNoArgs 0.19
207 TestMinikubeProfile 517.8
210 TestMountStart/serial/StartWithMountFirst 151.54
211 TestMountStart/serial/VerifyMountFirst 9.36
212 TestMountStart/serial/StartWithMountSecond 152.6
213 TestMountStart/serial/VerifyMountSecond 9.29
214 TestMountStart/serial/DeleteFirst 30.08
215 TestMountStart/serial/VerifyMountPostDelete 9.15
216 TestMountStart/serial/Stop 29.07
217 TestMountStart/serial/RestartStopped 114.77
218 TestMountStart/serial/VerifyMountPostStop 9.06
221 TestMultiNode/serial/FreshStart2Nodes 424.26
222 TestMultiNode/serial/DeployApp2Nodes 9.03
224 TestMultiNode/serial/AddNode 232.64
225 TestMultiNode/serial/MultiNodeLabels 0.18
226 TestMultiNode/serial/ProfileList 34.73
227 TestMultiNode/serial/CopyFile 349.43
228 TestMultiNode/serial/StopNode 74.76
229 TestMultiNode/serial/StartAfterStop 190.03
234 TestPreload 495.57
235 TestScheduledStopWindows 322.57
240 TestRunningBinaryUpgrade 1042.24
245 TestNoKubernetes/serial/StartNoK8sWithVersion 0.31
247 TestStoppedBinaryUpgrade/Setup 1.03
248 TestStoppedBinaryUpgrade/Upgrade 774.42
267 TestStoppedBinaryUpgrade/MinikubeLogs 9.68
TestDownloadOnly/v1.20.0/json-events (20.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-763300 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-763300 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperv: (20.1991901s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (20.20s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1014 06:41:34.242169     936 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I1014 06:41:34.243796     936 preload.go:146] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.31s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-763300
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-763300: exit status 85 (311.4089ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-763300 | minikube1\jenkins | v1.34.0 | 14 Oct 24 06:41 PDT |          |
	|         | -p download-only-763300        |                      |                   |         |                     |          |
	|         | --force --alsologtostderr      |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |                   |         |                     |          |
	|         | --container-runtime=docker     |                      |                   |         |                     |          |
	|         | --driver=hyperv                |                      |                   |         |                     |          |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/14 06:41:14
	Running on machine: minikube1
	Binary: Built with gc go1.23.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 06:41:14.153896    3628 out.go:345] Setting OutFile to fd 760 ...
	I1014 06:41:14.155405    3628 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 06:41:14.155405    3628 out.go:358] Setting ErrFile to fd 764...
	I1014 06:41:14.155405    3628 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1014 06:41:14.171389    3628 root.go:314] Error reading config file at C:\Users\jenkins.minikube1\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube1\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I1014 06:41:14.182676    3628 out.go:352] Setting JSON to true
	I1014 06:41:14.186543    3628 start.go:129] hostinfo: {"hostname":"minikube1","uptime":98788,"bootTime":1728814485,"procs":205,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5011 Build 19045.5011","kernelVersion":"10.0.19045.5011 Build 19045.5011","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W1014 06:41:14.186543    3628 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1014 06:41:14.192662    3628 out.go:97] [download-only-763300] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5011 Build 19045.5011
	I1014 06:41:14.193509    3628 notify.go:220] Checking for updates...
	W1014 06:41:14.193509    3628 preload.go:293] Failed to list preload files: open C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I1014 06:41:14.196594    3628 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1014 06:41:14.199401    3628 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I1014 06:41:14.202977    3628 out.go:169] MINIKUBE_LOCATION=19790
	I1014 06:41:14.206166    3628 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W1014 06:41:14.210691    3628 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1014 06:41:14.211464    3628 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 06:41:19.676700    3628 out.go:97] Using the hyperv driver based on user configuration
	I1014 06:41:19.676839    3628 start.go:297] selected driver: hyperv
	I1014 06:41:19.677010    3628 start.go:901] validating driver "hyperv" against <nil>
	I1014 06:41:19.677413    3628 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1014 06:41:19.728359    3628 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I1014 06:41:19.729769    3628 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1014 06:41:19.730379    3628 cni.go:84] Creating CNI manager for ""
	I1014 06:41:19.730379    3628 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1014 06:41:19.730379    3628 start.go:340] cluster config:
	{Name:download-only-763300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-763300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 06:41:19.731171    3628 iso.go:125] acquiring lock: {Name:mkb71635bcbccba29dce9048ad4cc71430a7e577 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 06:41:19.734755    3628 out.go:97] Downloading VM boot image ...
	I1014 06:41:19.735095    3628 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso.sha256 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\iso\amd64\minikube-v1.34.0-1728382514-19774-amd64.iso
	I1014 06:41:27.185567    3628 out.go:97] Starting "download-only-763300" primary control-plane node in "download-only-763300" cluster
	I1014 06:41:27.186063    3628 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1014 06:41:27.229291    3628 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I1014 06:41:27.229291    3628 cache.go:56] Caching tarball of preloaded images
	I1014 06:41:27.229837    3628 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1014 06:41:27.233243    3628 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1014 06:41:27.233243    3628 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I1014 06:41:27.302851    3628 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I1014 06:41:30.716960    3628 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I1014 06:41:30.717341    3628 preload.go:254] verifying checksum of C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I1014 06:41:31.781506    3628 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1014 06:41:31.782524    3628 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\download-only-763300\config.json ...
	I1014 06:41:31.783189    3628 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\download-only-763300\config.json: {Name:mkc57e430ac1024ae84cd3b14ea98a5f67636edf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 06:41:31.784551    3628 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1014 06:41:31.786340    3628 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/windows/amd64/kubectl.exe.sha256 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\windows\amd64\v1.20.0/kubectl.exe
	
	
	* The control-plane node download-only-763300 host does not exist
	  To start a cluster, run: "minikube start -p download-only-763300"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.31s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.64s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.64s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.81s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-763300
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.81s)

                                                
                                    
TestDownloadOnly/v1.31.1/json-events (10.89s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-730500 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-730500 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=hyperv: (10.8904337s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (10.89s)

                                                
                                    
TestDownloadOnly/v1.31.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I1014 06:41:46.895568     936 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I1014 06:41:46.895869     936 preload.go:146] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/kubectl
--- PASS: TestDownloadOnly/v1.31.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/LogsDuration (0.27s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-730500
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-730500: exit status 85 (273.126ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-763300 | minikube1\jenkins | v1.34.0 | 14 Oct 24 06:41 PDT |                     |
	|         | -p download-only-763300        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=hyperv                |                      |                   |         |                     |                     |
	| delete  | --all                          | minikube             | minikube1\jenkins | v1.34.0 | 14 Oct 24 06:41 PDT | 14 Oct 24 06:41 PDT |
	| delete  | -p download-only-763300        | download-only-763300 | minikube1\jenkins | v1.34.0 | 14 Oct 24 06:41 PDT | 14 Oct 24 06:41 PDT |
	| start   | -o=json --download-only        | download-only-730500 | minikube1\jenkins | v1.34.0 | 14 Oct 24 06:41 PDT |                     |
	|         | -p download-only-730500        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=hyperv                |                      |                   |         |                     |                     |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/14 06:41:36
	Running on machine: minikube1
	Binary: Built with gc go1.23.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 06:41:36.125124   11444 out.go:345] Setting OutFile to fd 808 ...
	I1014 06:41:36.127119   11444 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 06:41:36.127119   11444 out.go:358] Setting ErrFile to fd 736...
	I1014 06:41:36.127119   11444 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 06:41:36.150549   11444 out.go:352] Setting JSON to true
	I1014 06:41:36.154091   11444 start.go:129] hostinfo: {"hostname":"minikube1","uptime":98810,"bootTime":1728814485,"procs":205,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5011 Build 19045.5011","kernelVersion":"10.0.19045.5011 Build 19045.5011","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W1014 06:41:36.154091   11444 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1014 06:41:36.159946   11444 out.go:97] [download-only-730500] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5011 Build 19045.5011
	I1014 06:41:36.159946   11444 notify.go:220] Checking for updates...
	I1014 06:41:36.162492   11444 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1014 06:41:36.165455   11444 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I1014 06:41:36.169183   11444 out.go:169] MINIKUBE_LOCATION=19790
	I1014 06:41:36.171909   11444 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W1014 06:41:36.177224   11444 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1014 06:41:36.178430   11444 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 06:41:41.696232   11444 out.go:97] Using the hyperv driver based on user configuration
	I1014 06:41:41.696315   11444 start.go:297] selected driver: hyperv
	I1014 06:41:41.696348   11444 start.go:901] validating driver "hyperv" against <nil>
	I1014 06:41:41.696663   11444 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1014 06:41:41.744054   11444 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I1014 06:41:41.746433   11444 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1014 06:41:41.746666   11444 cni.go:84] Creating CNI manager for ""
	I1014 06:41:41.746813   11444 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1014 06:41:41.746813   11444 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1014 06:41:41.747086   11444 start.go:340] cluster config:
	{Name:download-only-730500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-730500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 06:41:41.747471   11444 iso.go:125] acquiring lock: {Name:mkb71635bcbccba29dce9048ad4cc71430a7e577 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 06:41:41.751261   11444 out.go:97] Starting "download-only-730500" primary control-plane node in "download-only-730500" cluster
	I1014 06:41:41.751261   11444 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1014 06:41:41.790759   11444 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I1014 06:41:41.790759   11444 cache.go:56] Caching tarball of preloaded images
	I1014 06:41:41.790759   11444 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1014 06:41:41.794582   11444 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I1014 06:41:41.794582   11444 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 ...
	I1014 06:41:41.868577   11444 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4?checksum=md5:42e9a173dd5f0c45ed1a890dd06aec5a -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I1014 06:41:44.778855   11444 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 ...
	I1014 06:41:44.779366   11444 preload.go:254] verifying checksum of C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 ...
	I1014 06:41:45.620959   11444 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1014 06:41:45.621541   11444 profile.go:143] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\download-only-730500\config.json ...
	I1014 06:41:45.622170   11444 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\download-only-730500\config.json: {Name:mk0b154fa3057e9bb30958288d1f5520fd9fc828 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 06:41:45.623441   11444 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1014 06:41:45.623845   11444 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/windows/amd64/kubectl.exe.sha256 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\windows\amd64\v1.31.1/kubectl.exe
	
	
	* The control-plane node download-only-730500 host does not exist
	  To start a cluster, run: "minikube start -p download-only-730500"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.27s)

TestDownloadOnly/v1.31.1/DeleteAll (0.86s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.86s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.66s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-730500
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.66s)

TestBinaryMirror (7.05s)

=== RUN   TestBinaryMirror
I1014 06:41:50.159167     936 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/windows/amd64/kubectl.exe.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-511500 --alsologtostderr --binary-mirror http://127.0.0.1:56189 --driver=hyperv
aaa_download_only_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-511500 --alsologtostderr --binary-mirror http://127.0.0.1:56189 --driver=hyperv: (6.329471s)
helpers_test.go:175: Cleaning up "binary-mirror-511500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-511500
--- PASS: TestBinaryMirror (7.05s)

TestOffline (268.42s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-204300 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv
aab_offline_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe start -p offline-docker-204300 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv: (3m49.085216s)
helpers_test.go:175: Cleaning up "offline-docker-204300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-204300
E1014 09:10:37.384902     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\functional-572000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-204300: (39.3380672s)
--- PASS: TestOffline (268.42s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.21s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:935: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-043000
addons_test.go:935: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons enable dashboard -p addons-043000: exit status 85 (209.8686ms)

-- stdout --
	* Profile "addons-043000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-043000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.21s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.19s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:946: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-043000
addons_test.go:946: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons disable dashboard -p addons-043000: exit status 85 (192.2298ms)

-- stdout --
	* Profile "addons-043000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-043000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.19s)

TestAddons/Setup (433.42s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-043000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=hyperv --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-windows-amd64.exe start -p addons-043000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=hyperv --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (7m13.415669s)
--- PASS: TestAddons/Setup (433.42s)

TestAddons/serial/Volcano (65.55s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:811: volcano-admission stabilized in 17.2626ms
addons_test.go:803: volcano-scheduler stabilized in 17.2626ms
addons_test.go:819: volcano-controller stabilized in 17.2626ms
addons_test.go:825: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-6c9778cbdf-sgj5l" [d037afa2-2e49-4c78-9bea-30ca46770014] Running
addons_test.go:825: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.0052383s
addons_test.go:829: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5874dfdd79-x22cf" [65f74ed0-5c68-4cc1-9b67-0031b3056c62] Running
addons_test.go:829: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 6.0064005s
addons_test.go:833: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-789ffc5785-pmxbv" [49f71fad-f869-42a4-9856-112fc3eab0ed] Running
addons_test.go:833: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.0058686s
addons_test.go:838: (dbg) Run:  kubectl --context addons-043000 delete -n volcano-system job volcano-admission-init
addons_test.go:844: (dbg) Run:  kubectl --context addons-043000 create -f testdata\vcjob.yaml
addons_test.go:852: (dbg) Run:  kubectl --context addons-043000 get vcjob -n my-volcano
addons_test.go:870: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [bb0e18b4-86b4-4dab-b3c5-b6f9f4f7480e] Pending
helpers_test.go:344: "test-job-nginx-0" [bb0e18b4-86b4-4dab-b3c5-b6f9f4f7480e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [bb0e18b4-86b4-4dab-b3c5-b6f9f4f7480e] Running
addons_test.go:870: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 22.0078242s
addons_test.go:988: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-043000 addons disable volcano --alsologtostderr -v=1
addons_test.go:988: (dbg) Done: out/minikube-windows-amd64.exe -p addons-043000 addons disable volcano --alsologtostderr -v=1: (25.6501546s)
--- PASS: TestAddons/serial/Volcano (65.55s)

TestAddons/serial/GCPAuth/Namespaces (0.35s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-043000 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-043000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.35s)

TestAddons/serial/GCPAuth/PullSecret (9.6s)

=== RUN   TestAddons/serial/GCPAuth/PullSecret
addons_test.go:614: (dbg) Run:  kubectl --context addons-043000 create -f testdata\busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-043000 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/PullSecret: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f6389f25-3b0f-41b1-824f-916a6b5f3814] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f6389f25-3b0f-41b1-824f-916a6b5f3814] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/PullSecret: integration-test=busybox healthy within 8.0067781s
addons_test.go:633: (dbg) Run:  kubectl --context addons-043000 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-043000 describe sa gcp-auth-test
addons_test.go:659: (dbg) Run:  kubectl --context addons-043000 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:683: (dbg) Run:  kubectl --context addons-043000 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/PullSecret (9.60s)

TestAddons/parallel/Registry (34.25s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 7.3764ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-f5df5" [4729f69f-72e0-4737-a535-e13167d62484] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.008943s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-nxmdj" [33f541d2-a27a-428c-8b27-65f625869d21] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.0082682s
addons_test.go:331: (dbg) Run:  kubectl --context addons-043000 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-043000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-043000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.099209s)
addons_test.go:350: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-043000 ip
addons_test.go:350: (dbg) Done: out/minikube-windows-amd64.exe -p addons-043000 ip: (2.6238658s)
2024/10/14 06:51:14 [DEBUG] GET http://172.20.107.120:5000
addons_test.go:988: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-043000 addons disable registry --alsologtostderr -v=1
addons_test.go:988: (dbg) Done: out/minikube-windows-amd64.exe -p addons-043000 addons disable registry --alsologtostderr -v=1: (15.2529861s)
--- PASS: TestAddons/parallel/Registry (34.25s)

TestAddons/parallel/Ingress (59.44s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-043000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-043000 replace --force -f testdata\nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-043000 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [94259873-6599-4f0b-96f3-9d8cb0f40970] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [94259873-6599-4f0b-96f3-9d8cb0f40970] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.0091736s
I1014 06:52:02.980849     936 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-043000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe -p addons-043000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (9.3273562s)
addons_test.go:286: (dbg) Run:  kubectl --context addons-043000 replace --force -f testdata\ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-043000 ip
addons_test.go:291: (dbg) Done: out/minikube-windows-amd64.exe -p addons-043000 ip: (2.3398568s)
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 172.20.107.120
addons_test.go:988: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-043000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:988: (dbg) Done: out/minikube-windows-amd64.exe -p addons-043000 addons disable ingress-dns --alsologtostderr -v=1: (14.9839833s)
addons_test.go:988: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-043000 addons disable ingress --alsologtostderr -v=1
addons_test.go:988: (dbg) Done: out/minikube-windows-amd64.exe -p addons-043000 addons disable ingress --alsologtostderr -v=1: (21.817376s)
--- PASS: TestAddons/parallel/Ingress (59.44s)

TestAddons/parallel/InspektorGadget (27.17s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:758: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-2dcgn" [59376f85-f060-454e-b505-4cb287e1827e] Running
addons_test.go:758: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.0068559s
addons_test.go:988: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-043000 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:988: (dbg) Done: out/minikube-windows-amd64.exe -p addons-043000 addons disable inspektor-gadget --alsologtostderr -v=1: (21.156989s)
--- PASS: TestAddons/parallel/InspektorGadget (27.17s)

TestAddons/parallel/MetricsServer (22.81s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 10.3252ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-94khb" [ad06121a-8f2a-4d02-b2dd-b2032c3889ce] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.0080797s
addons_test.go:402: (dbg) Run:  kubectl --context addons-043000 top pods -n kube-system
addons_test.go:988: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-043000 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:988: (dbg) Done: out/minikube-windows-amd64.exe -p addons-043000 addons disable metrics-server --alsologtostderr -v=1: (16.5728577s)
--- PASS: TestAddons/parallel/MetricsServer (22.81s)

TestAddons/parallel/CSI (90.67s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1014 06:51:29.703033     936 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1014 06:51:29.710965     936 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1014 06:51:29.710965     936 kapi.go:107] duration metric: took 7.9327ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 7.9327ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-043000 create -f testdata\csi-hostpath-driver\pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-043000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-043000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-043000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-043000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-043000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-043000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-043000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-043000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-043000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-043000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-043000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-043000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-043000 create -f testdata\csi-hostpath-driver\pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [483c6678-5746-47af-b9c9-1ea3fd5c3538] Pending
helpers_test.go:344: "task-pv-pod" [483c6678-5746-47af-b9c9-1ea3fd5c3538] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [483c6678-5746-47af-b9c9-1ea3fd5c3538] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.0089665s
addons_test.go:511: (dbg) Run:  kubectl --context addons-043000 create -f testdata\csi-hostpath-driver\snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-043000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-043000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-043000 delete pod task-pv-pod
addons_test.go:521: (dbg) Done: kubectl --context addons-043000 delete pod task-pv-pod: (1.4367713s)
addons_test.go:527: (dbg) Run:  kubectl --context addons-043000 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-043000 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-043000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-043000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-043000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-043000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-043000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-043000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-043000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-043000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-043000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-043000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-043000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-043000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-043000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-043000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-043000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-043000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-043000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-043000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-043000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-043000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-043000 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [f6ea19f5-df15-4546-bf26-df9d1f14009e] Pending
helpers_test.go:344: "task-pv-pod-restore" [f6ea19f5-df15-4546-bf26-df9d1f14009e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [f6ea19f5-df15-4546-bf26-df9d1f14009e] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 10.0065249s
addons_test.go:553: (dbg) Run:  kubectl --context addons-043000 delete pod task-pv-pod-restore
addons_test.go:553: (dbg) Done: kubectl --context addons-043000 delete pod task-pv-pod-restore: (1.5347421s)
addons_test.go:557: (dbg) Run:  kubectl --context addons-043000 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-043000 delete volumesnapshot new-snapshot-demo
addons_test.go:988: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-043000 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:988: (dbg) Done: out/minikube-windows-amd64.exe -p addons-043000 addons disable volumesnapshots --alsologtostderr -v=1: (14.9728217s)
addons_test.go:988: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-043000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:988: (dbg) Done: out/minikube-windows-amd64.exe -p addons-043000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (20.6344658s)
--- PASS: TestAddons/parallel/CSI (90.67s)

TestAddons/parallel/Headlamp (42.17s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:743: (dbg) Run:  out/minikube-windows-amd64.exe addons enable headlamp -p addons-043000 --alsologtostderr -v=1
addons_test.go:743: (dbg) Done: out/minikube-windows-amd64.exe addons enable headlamp -p addons-043000 --alsologtostderr -v=1: (15.5204137s)
addons_test.go:748: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-d2lql" [02c19c75-545b-44a3-80e7-15f1ff4d4779] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-d2lql" [02c19c75-545b-44a3-80e7-15f1ff4d4779] Running
addons_test.go:748: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 19.015992s
addons_test.go:988: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-043000 addons disable headlamp --alsologtostderr -v=1
addons_test.go:988: (dbg) Done: out/minikube-windows-amd64.exe -p addons-043000 addons disable headlamp --alsologtostderr -v=1: (7.6255605s)
--- PASS: TestAddons/parallel/Headlamp (42.17s)

TestAddons/parallel/CloudSpanner (20.47s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:775: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-cdnjs" [06905b16-7d98-4c02-aefc-d33ec11a633a] Running
addons_test.go:775: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.1548885s
addons_test.go:988: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-043000 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:988: (dbg) Done: out/minikube-windows-amd64.exe -p addons-043000 addons disable cloud-spanner --alsologtostderr -v=1: (15.2961486s)
--- PASS: TestAddons/parallel/CloudSpanner (20.47s)

TestAddons/parallel/LocalPath (35.27s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:884: (dbg) Run:  kubectl --context addons-043000 apply -f testdata\storage-provisioner-rancher\pvc.yaml
addons_test.go:890: (dbg) Run:  kubectl --context addons-043000 apply -f testdata\storage-provisioner-rancher\pod.yaml
addons_test.go:894: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-043000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-043000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-043000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-043000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-043000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-043000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-043000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-043000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-043000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-043000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-043000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:897: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [9ef0d5a2-ea03-49e8-aa9c-bb1a9531512f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [9ef0d5a2-ea03-49e8-aa9c-bb1a9531512f] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [9ef0d5a2-ea03-49e8-aa9c-bb1a9531512f] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:897: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.0070014s
addons_test.go:902: (dbg) Run:  kubectl --context addons-043000 get pvc test-pvc -o=json
addons_test.go:911: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-043000 ssh "cat /opt/local-path-provisioner/pvc-519ab6e0-e6d2-4eb9-ace0-5c155747c835_default_test-pvc/file1"
addons_test.go:911: (dbg) Done: out/minikube-windows-amd64.exe -p addons-043000 ssh "cat /opt/local-path-provisioner/pvc-519ab6e0-e6d2-4eb9-ace0-5c155747c835_default_test-pvc/file1": (9.6830433s)
addons_test.go:923: (dbg) Run:  kubectl --context addons-043000 delete pod test-local-path
addons_test.go:927: (dbg) Run:  kubectl --context addons-043000 delete pvc test-pvc
addons_test.go:988: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-043000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:988: (dbg) Done: out/minikube-windows-amd64.exe -p addons-043000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (7.9575855s)
--- PASS: TestAddons/parallel/LocalPath (35.27s)

TestAddons/parallel/NvidiaDevicePlugin (22.21s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:960: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-lp26n" [07a82f90-e052-44db-a41e-ca106119fc30] Running
addons_test.go:960: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.0064406s
addons_test.go:988: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-043000 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:988: (dbg) Done: out/minikube-windows-amd64.exe -p addons-043000 addons disable nvidia-device-plugin --alsologtostderr -v=1: (16.2048905s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (22.21s)

TestAddons/parallel/Yakd (27.1s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:982: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-qpbkz" [7f09a231-e369-439e-b578-e4c0f79c4eb5] Running
addons_test.go:982: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.006511s
addons_test.go:988: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-043000 addons disable yakd --alsologtostderr -v=1
addons_test.go:988: (dbg) Done: out/minikube-windows-amd64.exe -p addons-043000 addons disable yakd --alsologtostderr -v=1: (21.0932058s)
--- PASS: TestAddons/parallel/Yakd (27.10s)

TestAddons/StoppedEnableDisable (52.3s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-windows-amd64.exe stop -p addons-043000
addons_test.go:170: (dbg) Done: out/minikube-windows-amd64.exe stop -p addons-043000: (40.349886s)
addons_test.go:174: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-043000
addons_test.go:174: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p addons-043000: (4.8520172s)
addons_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-043000
addons_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe addons disable dashboard -p addons-043000: (4.5702958s)
addons_test.go:183: (dbg) Run:  out/minikube-windows-amd64.exe addons disable gvisor -p addons-043000
addons_test.go:183: (dbg) Done: out/minikube-windows-amd64.exe addons disable gvisor -p addons-043000: (2.5242102s)
--- PASS: TestAddons/StoppedEnableDisable (52.30s)

TestCertOptions (554.8s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-options-021000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperv
E1014 09:25:37.387334     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\functional-572000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
cert_options_test.go:49: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-options-021000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperv: (8m14.750885s)
cert_options_test.go:60: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-options-021000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Done: out/minikube-windows-amd64.exe -p cert-options-021000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": (9.7322524s)
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-021000 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p cert-options-021000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Done: out/minikube-windows-amd64.exe ssh -p cert-options-021000 -- "sudo cat /etc/kubernetes/admin.conf": (9.769222s)
helpers_test.go:175: Cleaning up "cert-options-021000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-options-021000
E1014 09:33:40.486112     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\functional-572000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-options-021000: (40.3948112s)
--- PASS: TestCertOptions (554.80s)

TestCertExpiration (890.94s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-227900 --memory=2048 --cert-expiration=3m --driver=hyperv
cert_options_test.go:123: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-227900 --memory=2048 --cert-expiration=3m --driver=hyperv: (6m2.8134265s)
E1014 09:28:53.952759     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\addons-043000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1014 09:29:10.855732     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\addons-043000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
cert_options_test.go:131: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-227900 --memory=2048 --cert-expiration=8760h --driver=hyperv
cert_options_test.go:131: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-227900 --memory=2048 --cert-expiration=8760h --driver=hyperv: (5m7.0379484s)
helpers_test.go:175: Cleaning up "cert-expiration-227900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-expiration-227900
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-expiration-227900: (41.0875271s)
--- PASS: TestCertExpiration (890.94s)

TestDockerFlags (469.15s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-flags-258800 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperv
docker_test.go:51: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-flags-258800 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperv: (6m43.7686659s)
docker_test.go:56: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-258800 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-258800 ssh "sudo systemctl show docker --property=Environment --no-pager": (10.1186814s)
docker_test.go:67: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-258800 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
E1014 09:30:37.386521     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\functional-572000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
docker_test.go:67: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-258800 ssh "sudo systemctl show docker --property=ExecStart --no-pager": (9.6921459s)
helpers_test.go:175: Cleaning up "docker-flags-258800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-flags-258800
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-flags-258800: (45.5658888s)
--- PASS: TestDockerFlags (469.15s)

TestForceSystemdFlag (505.98s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-flag-958000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv
docker_test.go:91: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-flag-958000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv: (7m36.5816021s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-flag-958000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-flag-958000 ssh "docker info --format {{.CgroupDriver}}": (9.666688s)
helpers_test.go:175: Cleaning up "force-systemd-flag-958000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-flag-958000
E1014 09:19:10.855523     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\addons-043000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-flag-958000: (39.7283538s)
--- PASS: TestForceSystemdFlag (505.98s)

TestForceSystemdEnv (406.63s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-env-862600 --memory=2048 --alsologtostderr -v=5 --driver=hyperv
E1014 09:20:37.386451     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\functional-572000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
docker_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-env-862600 --memory=2048 --alsologtostderr -v=5 --driver=hyperv: (5m51.2525296s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-env-862600 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-env-862600 ssh "docker info --format {{.CgroupDriver}}": (9.652968s)
helpers_test.go:175: Cleaning up "force-systemd-env-862600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-env-862600
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-env-862600: (45.7226338s)
--- PASS: TestForceSystemdEnv (406.63s)

TestErrorSpam/start (16.92s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-274900 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-274900 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-274900 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-274900 start --dry-run: (5.6222608s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-274900 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-274900 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-274900 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-274900 start --dry-run: (5.6116167s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-274900 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-274900 start --dry-run
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-274900 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-274900 start --dry-run: (5.6870267s)
--- PASS: TestErrorSpam/start (16.92s)

TestErrorSpam/status (35.83s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-274900 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-274900 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-274900 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-274900 status: (12.2882404s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-274900 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-274900 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-274900 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-274900 status: (11.8617044s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-274900 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-274900 status
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-274900 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-274900 status: (11.6730774s)
--- PASS: TestErrorSpam/status (35.83s)

TestErrorSpam/pause (22.32s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-274900 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-274900 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-274900 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-274900 pause: (7.630713s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-274900 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-274900 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-274900 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-274900 pause: (7.3329922s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-274900 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-274900 pause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-274900 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-274900 pause: (7.3576832s)
--- PASS: TestErrorSpam/pause (22.32s)

TestErrorSpam/unpause (22.4s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-274900 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-274900 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-274900 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-274900 unpause: (7.4903847s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-274900 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-274900 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-274900 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-274900 unpause: (7.4234476s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-274900 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-274900 unpause
E1014 06:59:10.842512     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\addons-043000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1014 06:59:10.849645     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\addons-043000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1014 06:59:10.861804     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\addons-043000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1014 06:59:10.884435     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\addons-043000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1014 06:59:10.926373     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\addons-043000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1014 06:59:11.009102     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\addons-043000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1014 06:59:11.170848     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\addons-043000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1014 06:59:11.493161     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\addons-043000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1014 06:59:12.135060     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\addons-043000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1014 06:59:13.417903     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\addons-043000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1014 06:59:15.980930     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\addons-043000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-274900 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-274900 unpause: (7.4872345s)
--- PASS: TestErrorSpam/unpause (22.40s)

TestErrorSpam/stop (55.56s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-274900 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-274900 stop
E1014 06:59:21.102719     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\addons-043000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1014 06:59:31.345802     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\addons-043000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1014 06:59:51.828155     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\addons-043000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-274900 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-274900 stop: (33.8799543s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-274900 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-274900 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-274900 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-274900 stop: (11.0306409s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-274900 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-274900 stop
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-274900 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-274900 stop: (10.6425573s)
--- PASS: TestErrorSpam/stop (55.56s)
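
Note on the cert_rotation "Unhandled Error" lines repeated above: they appear to come from client-go's certificate reloader still watching the client certificate of the addons-043000 profile, which an earlier test already deleted, so they are leftover noise from that profile rather than a failure of TestErrorSpam/stop itself. A minimal diagnostic sketch in Go, assuming only that the path printed in the errors is gone (the check is illustrative and not part of the test suite):

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// Certificate path reported by the cert_rotation errors above.
		crt := `C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-043000\client.crt`
		if _, err := os.Stat(crt); os.IsNotExist(err) {
			// Expected: the addons-043000 profile was deleted earlier in the
			// run, so any watcher still holding this path keeps erroring.
			fmt.Println("stale reference:", crt, "no longer exists")
		}
	}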

TestFunctional/serial/CopySyncFile (0.04s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\test\nested\copy\936\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.04s)

TestFunctional/serial/StartWithProxy (194.87s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-572000 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv
E1014 07:00:32.790293     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\addons-043000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1014 07:01:54.712869     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\addons-043000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-572000 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv: (3m14.8572809s)
--- PASS: TestFunctional/serial/StartWithProxy (194.87s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (125.66s)

=== RUN   TestFunctional/serial/SoftStart
I1014 07:03:44.736222     936 config.go:182] Loaded profile config "functional-572000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-572000 --alsologtostderr -v=8
E1014 07:04:10.843678     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\addons-043000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1014 07:04:38.555661     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\addons-043000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-572000 --alsologtostderr -v=8: (2m5.6534578s)
functional_test.go:663: soft start took 2m5.655703s for "functional-572000" cluster.
I1014 07:05:50.391603     936 config.go:182] Loaded profile config "functional-572000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (125.66s)

TestFunctional/serial/KubeContext (0.14s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.14s)

TestFunctional/serial/KubectlGetPods (0.23s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-572000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.23s)

TestFunctional/serial/CacheCmd/cache/add_remote (26.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-572000 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-windows-amd64.exe -p functional-572000 cache add registry.k8s.io/pause:3.1: (8.873166s)
functional_test.go:1049: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-572000 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-windows-amd64.exe -p functional-572000 cache add registry.k8s.io/pause:3.3: (9.0506831s)
functional_test.go:1049: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-572000 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-windows-amd64.exe -p functional-572000 cache add registry.k8s.io/pause:latest: (8.3793347s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (26.30s)

TestFunctional/serial/CacheCmd/cache/add_local (9.88s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-572000 C:\Users\jenkins.minikube1\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local1619440686\001
functional_test.go:1077: (dbg) Done: docker build -t minikube-local-cache-test:functional-572000 C:\Users\jenkins.minikube1\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local1619440686\001: (1.5727698s)
functional_test.go:1089: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-572000 cache add minikube-local-cache-test:functional-572000
functional_test.go:1089: (dbg) Done: out/minikube-windows-amd64.exe -p functional-572000 cache add minikube-local-cache-test:functional-572000: (7.9858169s)
functional_test.go:1094: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-572000 cache delete minikube-local-cache-test:functional-572000
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-572000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (9.88s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.2s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.20s)

TestFunctional/serial/CacheCmd/cache/list (0.2s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.20s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (9.19s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-572000 ssh sudo crictl images
functional_test.go:1124: (dbg) Done: out/minikube-windows-amd64.exe -p functional-572000 ssh sudo crictl images: (9.193065s)
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (9.19s)

TestFunctional/serial/CacheCmd/cache/cache_reload (35.41s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-572000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1147: (dbg) Done: out/minikube-windows-amd64.exe -p functional-572000 ssh sudo docker rmi registry.k8s.io/pause:latest: (9.1171513s)
functional_test.go:1153: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-572000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-572000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (9.2139422s)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-572000 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-windows-amd64.exe -p functional-572000 cache reload: (7.9524016s)
functional_test.go:1163: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-572000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1163: (dbg) Done: out/minikube-windows-amd64.exe -p functional-572000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: (9.1272246s)
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (35.41s)
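
The cache_reload sequence above exercises a single invariant: an image removed from the node's Docker runtime can be restored from minikube's on-host cache. A sketch of the same round trip driven from Go's os/exec (binary, profile, and image names are copied from the log; the run helper is our own and only illustrative):

	package main

	import (
		"log"
		"os/exec"
	)

	// run invokes the minikube binary used in this report and returns the
	// command's error so callers can assert on the exit status.
	func run(args ...string) error {
		out, err := exec.Command("out/minikube-windows-amd64.exe", args...).CombinedOutput()
		log.Printf("%v: %s", args, out)
		return err
	}

	func main() {
		const p = "functional-572000"
		const img = "registry.k8s.io/pause:latest"
		// 1. Remove the image from the node's runtime.
		run("-p", p, "ssh", "sudo docker rmi "+img)
		// 2. crictl inspecti must now fail ("no such image").
		if run("-p", p, "ssh", "sudo crictl inspecti "+img) == nil {
			log.Fatal("image unexpectedly still present")
		}
		// 3. Reload every cached image, then verify it is back.
		run("-p", p, "cache", "reload")
		if run("-p", p, "ssh", "sudo crictl inspecti "+img) != nil {
			log.Fatal("image not restored from cache")
		}
	}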

TestFunctional/serial/CacheCmd/cache/delete (0.4s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.40s)

TestFunctional/serial/MinikubeKubectlCmd (0.43s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-572000 kubectl -- --context functional-572000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.43s)

TestFunctional/serial/ExtraConfig (126.3s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-572000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1014 07:09:10.844083     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\addons-043000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-572000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (2m6.2960975s)
functional_test.go:761: restart took 2m6.2974027s for "functional-572000" cluster.
I1014 07:09:52.045892     936 config.go:182] Loaded profile config "functional-572000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (126.30s)

TestFunctional/serial/ComponentHealth (0.19s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-572000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.19s)

TestFunctional/serial/LogsCmd (8.47s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-572000 logs
functional_test.go:1236: (dbg) Done: out/minikube-windows-amd64.exe -p functional-572000 logs: (8.466509s)
--- PASS: TestFunctional/serial/LogsCmd (8.47s)

TestFunctional/serial/LogsFileCmd (10.54s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-572000 logs --file C:\Users\jenkins.minikube1\AppData\Local\Temp\TestFunctionalserialLogsFileCmd4140889447\001\logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-windows-amd64.exe -p functional-572000 logs --file C:\Users\jenkins.minikube1\AppData\Local\Temp\TestFunctionalserialLogsFileCmd4140889447\001\logs.txt: (10.5288559s)
--- PASS: TestFunctional/serial/LogsFileCmd (10.54s)

TestFunctional/serial/InvalidService (20.66s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-572000 apply -f testdata\invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-windows-amd64.exe service invalid-svc -p functional-572000
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-windows-amd64.exe service invalid-svc -p functional-572000: exit status 115 (16.3455016s)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://172.20.99.72:32153 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube_service_f513297bf07cd3fd942cead3a34f1b094af52476_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-572000 delete -f testdata\invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (20.66s)

TestFunctional/parallel/ConfigCmd (1.4s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-572000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-572000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-572000 config get cpus: exit status 14 (185.7199ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-572000 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-572000 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-572000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-572000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-572000 config get cpus: exit status 14 (180.843ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (1.40s)

TestFunctional/parallel/StatusCmd (41.75s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-572000 status
functional_test.go:854: (dbg) Done: out/minikube-windows-amd64.exe -p functional-572000 status: (13.4149672s)
functional_test.go:860: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-572000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:860: (dbg) Done: out/minikube-windows-amd64.exe -p functional-572000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: (13.9345114s)
functional_test.go:872: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-572000 status -o json
functional_test.go:872: (dbg) Done: out/minikube-windows-amd64.exe -p functional-572000 status -o json: (14.3988994s)
--- PASS: TestFunctional/parallel/StatusCmd (41.75s)

TestFunctional/parallel/ServiceCmdConnect (26.46s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-572000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-572000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-t5ddg" [0c32b139-9aee-43b9-be8d-8438af0948fa] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-t5ddg" [0c32b139-9aee-43b9-be8d-8438af0948fa] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.0063155s
functional_test.go:1649: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-572000 service hello-node-connect --url
functional_test.go:1649: (dbg) Done: out/minikube-windows-amd64.exe -p functional-572000 service hello-node-connect --url: (17.9934521s)
functional_test.go:1655: found endpoint for hello-node-connect: http://172.20.99.72:32137
functional_test.go:1675: http://172.20.99.72:32137: success! body:

Hostname: hello-node-connect-67bdd5bbb4-t5ddg

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://172.20.99.72:8080/

Request Headers:
	accept-encoding=gzip
	host=172.20.99.72:32137
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (26.46s)
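
The endpoint check above is a plain HTTP GET against the NodePort URL printed by "service hello-node-connect --url"; the echoserver body is what any client would receive. A sketch of the same probe (URL copied from the log, so it is only meaningful inside this test run):

	package main

	import (
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		// NodePort URL discovered by "minikube service ... --url" above.
		resp, err := http.Get("http://172.20.99.72:32137")
		if err != nil {
			fmt.Println("endpoint not reachable:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%s", body) // echoserver reports hostname and request details
	}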

TestFunctional/parallel/AddonsCmd (0.52s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-572000 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-572000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.52s)

TestFunctional/parallel/PersistentVolumeClaim (37.78s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [8ec5c3c6-2441-47cb-9862-e6c87bce62c2] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.0084869s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-572000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-572000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-572000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-572000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [67aa7da2-1304-4561-ba65-e1360e5bfdc3] Pending
helpers_test.go:344: "sp-pod" [67aa7da2-1304-4561-ba65-e1360e5bfdc3] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [67aa7da2-1304-4561-ba65-e1360e5bfdc3] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 22.0073446s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-572000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-572000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-572000 delete -f testdata/storage-provisioner/pod.yaml: (1.6812481s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-572000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [918eff34-fe1c-4b10-b7fe-ebba0713ddc9] Pending
helpers_test.go:344: "sp-pod" [918eff34-fe1c-4b10-b7fe-ebba0713ddc9] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [918eff34-fe1c-4b10-b7fe-ebba0713ddc9] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.0076715s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-572000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (37.78s)
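
The "waiting ... for pods matching" lines above come from a polling helper; the persistence check itself is the touch/delete/recreate/ls sequence against /tmp/mount. A rough Go equivalent of that poll (context, label, and timeout mirror the log; the helper name and the jsonpath query are our own):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitRunning polls kubectl until every pod matching label reports phase
	// Running, or the timeout elapses.
	func waitRunning(ctx, label string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "--context", ctx, "get", "pods",
				"-l", label, "-o", "jsonpath={.items[*].status.phase}").Output()
			if err == nil {
				phases := strings.Fields(string(out))
				running := len(phases) > 0
				for _, ph := range phases {
					if ph != "Running" {
						running = false
					}
				}
				if running {
					return nil
				}
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("pods %q not Running within %v", label, timeout)
	}

	func main() {
		fmt.Println(waitRunning("functional-572000", "test=storage-provisioner", 3*time.Minute))
	}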

TestFunctional/parallel/SSHCmd (20.66s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-572000 ssh "echo hello"
functional_test.go:1725: (dbg) Done: out/minikube-windows-amd64.exe -p functional-572000 ssh "echo hello": (10.1098705s)
functional_test.go:1742: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-572000 ssh "cat /etc/hostname"
functional_test.go:1742: (dbg) Done: out/minikube-windows-amd64.exe -p functional-572000 ssh "cat /etc/hostname": (10.5531409s)
--- PASS: TestFunctional/parallel/SSHCmd (20.66s)

TestFunctional/parallel/CpCmd (60.47s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-572000 cp testdata\cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-572000 cp testdata\cp-test.txt /home/docker/cp-test.txt: (8.5307915s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-572000 ssh -n functional-572000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-572000 ssh -n functional-572000 "sudo cat /home/docker/cp-test.txt": (10.1171708s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-572000 cp functional-572000:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestFunctionalparallelCpCmd1039540325\001\cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-572000 cp functional-572000:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestFunctionalparallelCpCmd1039540325\001\cp-test.txt: (10.372152s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-572000 ssh -n functional-572000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-572000 ssh -n functional-572000 "sudo cat /home/docker/cp-test.txt": (11.7328974s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-572000 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-572000 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt: (9.0373601s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-572000 ssh -n functional-572000 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-572000 ssh -n functional-572000 "sudo cat /tmp/does/not/exist/cp-test.txt": (10.6703s)
--- PASS: TestFunctional/parallel/CpCmd (60.47s)

TestFunctional/parallel/MySQL (67.57s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-572000 replace --force -f testdata\mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-7rvds" [1d1e2104-c7df-452b-9dae-d7d51aea03dd] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-7rvds" [1d1e2104-c7df-452b-9dae-d7d51aea03dd] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 50.0081211s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-572000 exec mysql-6cdb49bbb-7rvds -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-572000 exec mysql-6cdb49bbb-7rvds -- mysql -ppassword -e "show databases;": exit status 1 (274.6353ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1014 07:13:41.134575     936 retry.go:31] will retry after 1.004366231s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-572000 exec mysql-6cdb49bbb-7rvds -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-572000 exec mysql-6cdb49bbb-7rvds -- mysql -ppassword -e "show databases;": exit status 1 (348.1561ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I1014 07:13:42.497662     936 retry.go:31] will retry after 1.33771551s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-572000 exec mysql-6cdb49bbb-7rvds -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-572000 exec mysql-6cdb49bbb-7rvds -- mysql -ppassword -e "show databases;": exit status 1 (412.7578ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I1014 07:13:44.259741     936 retry.go:31] will retry after 2.894761891s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-572000 exec mysql-6cdb49bbb-7rvds -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-572000 exec mysql-6cdb49bbb-7rvds -- mysql -ppassword -e "show databases;": exit status 1 (343.4235ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I1014 07:13:47.512182     936 retry.go:31] will retry after 2.337508174s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-572000 exec mysql-6cdb49bbb-7rvds -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-572000 exec mysql-6cdb49bbb-7rvds -- mysql -ppassword -e "show databases;": exit status 1 (343.4637ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I1014 07:13:50.204157     936 retry.go:31] will retry after 7.465138487s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-572000 exec mysql-6cdb49bbb-7rvds -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (67.57s)
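
The ERROR 2002 and ERROR 1045 failures above are the normal startup window for the mysql container: the server socket does not exist yet, and then the root account is still being provisioned, so the harness simply retries with growing, jittered delays until the query succeeds. A sketch of that retry shape (the exact backoff policy is an assumption; only the retry-on-failure behaviour is visible in the log):

	package main

	import (
		"fmt"
		"math/rand"
		"os/exec"
		"time"
	)

	// retry re-runs fn with roughly exponential, jittered delays, in the
	// spirit of the "will retry after ..." lines above.
	func retry(attempts int, fn func() error) error {
		delay := time.Second
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			wait := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: %v\n", wait, err)
			time.Sleep(wait)
			delay *= 2
		}
		return err
	}

	func main() {
		err := retry(6, func() error {
			return exec.Command("kubectl", "--context", "functional-572000",
				"exec", "mysql-6cdb49bbb-7rvds", "--",
				"mysql", "-ppassword", "-e", "show databases;").Run()
		})
		fmt.Println("final:", err)
	}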

TestFunctional/parallel/FileSync (11.09s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/936/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-572000 ssh "sudo cat /etc/test/nested/copy/936/hosts"
functional_test.go:1931: (dbg) Done: out/minikube-windows-amd64.exe -p functional-572000 ssh "sudo cat /etc/test/nested/copy/936/hosts": (11.0942192s)
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (11.09s)

TestFunctional/parallel/CertSync (62.31s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/936.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-572000 ssh "sudo cat /etc/ssl/certs/936.pem"
functional_test.go:1973: (dbg) Done: out/minikube-windows-amd64.exe -p functional-572000 ssh "sudo cat /etc/ssl/certs/936.pem": (10.5699389s)
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/936.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-572000 ssh "sudo cat /usr/share/ca-certificates/936.pem"
functional_test.go:1973: (dbg) Done: out/minikube-windows-amd64.exe -p functional-572000 ssh "sudo cat /usr/share/ca-certificates/936.pem": (10.7057167s)
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-572000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1973: (dbg) Done: out/minikube-windows-amd64.exe -p functional-572000 ssh "sudo cat /etc/ssl/certs/51391683.0": (11.1000114s)
functional_test.go:1999: Checking for existence of /etc/ssl/certs/9362.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-572000 ssh "sudo cat /etc/ssl/certs/9362.pem"
functional_test.go:2000: (dbg) Done: out/minikube-windows-amd64.exe -p functional-572000 ssh "sudo cat /etc/ssl/certs/9362.pem": (10.4936784s)
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/9362.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-572000 ssh "sudo cat /usr/share/ca-certificates/9362.pem"
functional_test.go:2000: (dbg) Done: out/minikube-windows-amd64.exe -p functional-572000 ssh "sudo cat /usr/share/ca-certificates/9362.pem": (9.7101272s)
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-572000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:2000: (dbg) Done: out/minikube-windows-amd64.exe -p functional-572000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": (9.723894s)
--- PASS: TestFunctional/parallel/CertSync (62.31s)

TestFunctional/parallel/NodeLabels (0.25s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-572000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.25s)

TestFunctional/parallel/NonActiveRuntimeDisabled (10.88s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-572000 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-572000 ssh "sudo systemctl is-active crio": exit status 1 (10.8784033s)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (10.88s)
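
The non-zero exit above is the passing outcome: systemctl is-active returns 0 only for an active unit, and the "Process exited with status 3" in stderr is what systemd reports for an inactive one, which is exactly what the test wants for crio on a Docker-runtime node. A sketch of how a caller separates that expected failure from a real one (binary and profile copied from the log; illustrative only):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-windows-amd64.exe",
			"-p", "functional-572000", "ssh", "sudo systemctl is-active crio")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s", out) // "inactive" is the desired answer here
		if exitErr, ok := err.(*exec.ExitError); ok {
			// Non-zero exit is expected: is-active only returns 0 when the
			// unit is active, and this test asserts crio is disabled.
			fmt.Println("exit code:", exitErr.ExitCode())
		}
	}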

TestFunctional/parallel/License (3.33s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-windows-amd64.exe license
functional_test.go:2288: (dbg) Done: out/minikube-windows-amd64.exe license: (3.3113662s)
--- PASS: TestFunctional/parallel/License (3.33s)

TestFunctional/parallel/ImageCommands/ImageListShort (7.38s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-572000 image ls --format short --alsologtostderr
functional_test.go:261: (dbg) Done: out/minikube-windows-amd64.exe -p functional-572000 image ls --format short --alsologtostderr: (7.3808919s)
functional_test.go:266: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-572000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-572000
docker.io/kicbase/echo-server:functional-572000
functional_test.go:269: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-572000 image ls --format short --alsologtostderr:
I1014 07:13:29.378779    5468 out.go:345] Setting OutFile to fd 1260 ...
I1014 07:13:29.380604    5468 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1014 07:13:29.380604    5468 out.go:358] Setting ErrFile to fd 1440...
I1014 07:13:29.380604    5468 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1014 07:13:29.400956    5468 config.go:182] Loaded profile config "functional-572000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1014 07:13:29.400956    5468 config.go:182] Loaded profile config "functional-572000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1014 07:13:29.402216    5468 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-572000 ).state
I1014 07:13:31.625543    5468 main.go:141] libmachine: [stdout =====>] : Running

I1014 07:13:31.625543    5468 main.go:141] libmachine: [stderr =====>] : 
I1014 07:13:31.638194    5468 ssh_runner.go:195] Run: systemctl --version
I1014 07:13:31.638194    5468 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-572000 ).state
I1014 07:13:33.856629    5468 main.go:141] libmachine: [stdout =====>] : Running

I1014 07:13:33.856629    5468 main.go:141] libmachine: [stderr =====>] : 
I1014 07:13:33.856629    5468 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-572000 ).networkadapters[0]).ipaddresses[0]
I1014 07:13:36.460764    5468 main.go:141] libmachine: [stdout =====>] : 172.20.99.72

I1014 07:13:36.460764    5468 main.go:141] libmachine: [stderr =====>] : 
I1014 07:13:36.461594    5468 sshutil.go:53] new ssh client: &{IP:172.20.99.72 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-572000\id_rsa Username:docker}
I1014 07:13:36.560978    5468 ssh_runner.go:235] Completed: systemctl --version: (4.9227778s)
I1014 07:13:36.569700    5468 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (7.38s)

TestFunctional/parallel/ImageCommands/ImageListTable (7.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-572000 image ls --format table --alsologtostderr
functional_test.go:261: (dbg) Done: out/minikube-windows-amd64.exe -p functional-572000 image ls --format table --alsologtostderr: (7.3632563s)
functional_test.go:266: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-572000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/nginx                     | alpine            | cb8f91112b6b5 | 47MB   |
| registry.k8s.io/kube-apiserver              | v1.31.1           | 6bab7719df100 | 94.2MB |
| registry.k8s.io/coredns/coredns             | v1.11.3           | c69fa2e9cbf5f | 61.8MB |
| registry.k8s.io/pause                       | 3.10              | 873ed75102791 | 736kB  |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| docker.io/library/nginx                     | latest            | 7f553e8bbc897 | 192MB  |
| registry.k8s.io/kube-scheduler              | v1.31.1           | 9aa1fad941575 | 67.4MB |
| registry.k8s.io/kube-controller-manager     | v1.31.1           | 175ffd71cce3d | 88.4MB |
| registry.k8s.io/etcd                        | 3.5.15-0          | 2e96e5913fc06 | 148MB  |
| docker.io/kicbase/echo-server               | functional-572000 | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/minikube-local-cache-test | functional-572000 | 3833dfc17f2f4 | 30B    |
| registry.k8s.io/kube-proxy                  | v1.31.1           | 60c005f310ff3 | 91.5MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-572000 image ls --format table --alsologtostderr:
I1014 07:13:49.518246   12516 out.go:345] Setting OutFile to fd 1520 ...
I1014 07:13:49.518246   12516 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1014 07:13:49.520637   12516 out.go:358] Setting ErrFile to fd 1452...
I1014 07:13:49.520729   12516 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1014 07:13:49.540619   12516 config.go:182] Loaded profile config "functional-572000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1014 07:13:49.540619   12516 config.go:182] Loaded profile config "functional-572000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1014 07:13:49.541915   12516 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-572000 ).state
I1014 07:13:51.768667   12516 main.go:141] libmachine: [stdout =====>] : Running

I1014 07:13:51.769121   12516 main.go:141] libmachine: [stderr =====>] : 
I1014 07:13:51.782397   12516 ssh_runner.go:195] Run: systemctl --version
I1014 07:13:51.782397   12516 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-572000 ).state
I1014 07:13:53.969185   12516 main.go:141] libmachine: [stdout =====>] : Running

I1014 07:13:53.969185   12516 main.go:141] libmachine: [stderr =====>] : 
I1014 07:13:53.969185   12516 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-572000 ).networkadapters[0]).ipaddresses[0]
I1014 07:13:56.555734   12516 main.go:141] libmachine: [stdout =====>] : 172.20.99.72

I1014 07:13:56.555734   12516 main.go:141] libmachine: [stderr =====>] : 
I1014 07:13:56.555734   12516 sshutil.go:53] new ssh client: &{IP:172.20.99.72 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-572000\id_rsa Username:docker}
I1014 07:13:56.682358   12516 ssh_runner.go:235] Completed: systemctl --version: (4.8998876s)
I1014 07:13:56.693262   12516 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (7.36s)

TestFunctional/parallel/ImageCommands/ImageListJson (7.6s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-572000 image ls --format json --alsologtostderr
functional_test.go:261: (dbg) Done: out/minikube-windows-amd64.exe -p functional-572000 image ls --format json --alsologtostderr: (7.5961486s)
functional_test.go:266: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-572000 image ls --format json --alsologtostderr:
[{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"7f553e8bbc897571642d836b31eaf6ecbe395d7641c2b24291356ed28f3f2bd0","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"192000000"},{"id":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"67400000"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"736000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","repoDigests":[
],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"94200000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"88400000"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"148000000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"3833dfc17f2f466f622fdce1c38e1305962c71ac960e9270224015ead35a6a22","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-5
72000"],"size":"30"},{"id":"cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"47000000"},{"id":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"91500000"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"61800000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-572000"],"size":"4940000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-572000 image ls --format json --alsologtostderr:
I1014 07:13:41.929532    7056 out.go:345] Setting OutFile to fd 1636 ...
I1014 07:13:41.930999    7056 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1014 07:13:41.930999    7056 out.go:358] Setting ErrFile to fd 1588...
I1014 07:13:41.930999    7056 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1014 07:13:41.950481    7056 config.go:182] Loaded profile config "functional-572000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1014 07:13:41.951315    7056 config.go:182] Loaded profile config "functional-572000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1014 07:13:41.951596    7056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-572000 ).state
I1014 07:13:44.235591    7056 main.go:141] libmachine: [stdout =====>] : Running

I1014 07:13:44.235591    7056 main.go:141] libmachine: [stderr =====>] : 
I1014 07:13:44.248317    7056 ssh_runner.go:195] Run: systemctl --version
I1014 07:13:44.248317    7056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-572000 ).state
I1014 07:13:46.490814    7056 main.go:141] libmachine: [stdout =====>] : Running

I1014 07:13:46.490814    7056 main.go:141] libmachine: [stderr =====>] : 
I1014 07:13:46.491049    7056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-572000 ).networkadapters[0]).ipaddresses[0]
I1014 07:13:49.215288    7056 main.go:141] libmachine: [stdout =====>] : 172.20.99.72

I1014 07:13:49.215407    7056 main.go:141] libmachine: [stderr =====>] : 
I1014 07:13:49.215541    7056 sshutil.go:53] new ssh client: &{IP:172.20.99.72 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-572000\id_rsa Username:docker}
I1014 07:13:49.316196    7056 ssh_runner.go:235] Completed: systemctl --version: (5.0678729s)
I1014 07:13:49.326172    7056 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (7.60s)

TestFunctional/parallel/ImageCommands/ImageListYaml (7.59s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-572000 image ls --format yaml --alsologtostderr
functional_test.go:261: (dbg) Done: out/minikube-windows-amd64.exe -p functional-572000 image ls --format yaml --alsologtostderr: (7.5882496s)
functional_test.go:266: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-572000 image ls --format yaml --alsologtostderr:
- id: 3833dfc17f2f466f622fdce1c38e1305962c71ac960e9270224015ead35a6a22
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-572000
size: "30"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "91500000"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "148000000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 7f553e8bbc897571642d836b31eaf6ecbe395d7641c2b24291356ed28f3f2bd0
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "192000000"
- id: 6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "94200000"
- id: 9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "67400000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-572000
size: "4940000"
- id: 175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "88400000"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "61800000"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "736000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "47000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"

functional_test.go:269: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-572000 image ls --format yaml --alsologtostderr:
I1014 07:13:34.341501    1936 out.go:345] Setting OutFile to fd 1224 ...
I1014 07:13:34.364674    1936 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1014 07:13:34.364674    1936 out.go:358] Setting ErrFile to fd 1520...
I1014 07:13:34.364746    1936 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1014 07:13:34.384572    1936 config.go:182] Loaded profile config "functional-572000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1014 07:13:34.385574    1936 config.go:182] Loaded profile config "functional-572000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1014 07:13:34.386573    1936 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-572000 ).state
I1014 07:13:36.636172    1936 main.go:141] libmachine: [stdout =====>] : Running

I1014 07:13:36.636243    1936 main.go:141] libmachine: [stderr =====>] : 
I1014 07:13:36.651294    1936 ssh_runner.go:195] Run: systemctl --version
I1014 07:13:36.651294    1936 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-572000 ).state
I1014 07:13:38.944658    1936 main.go:141] libmachine: [stdout =====>] : Running

I1014 07:13:38.944766    1936 main.go:141] libmachine: [stderr =====>] : 
I1014 07:13:38.944766    1936 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-572000 ).networkadapters[0]).ipaddresses[0]
I1014 07:13:41.581952    1936 main.go:141] libmachine: [stdout =====>] : 172.20.99.72

I1014 07:13:41.581952    1936 main.go:141] libmachine: [stderr =====>] : 
I1014 07:13:41.582242    1936 sshutil.go:53] new ssh client: &{IP:172.20.99.72 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-572000\id_rsa Username:docker}
I1014 07:13:41.690980    1936 ssh_runner.go:235] Completed: systemctl --version: (5.0396798s)
I1014 07:13:41.710250    1936 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (7.59s)

TestFunctional/parallel/ImageCommands/ImageBuild (28.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-572000 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-572000 ssh pgrep buildkitd: exit status 1 (9.6709638s)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-572000 image build -t localhost/my-image:functional-572000 testdata\build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-windows-amd64.exe -p functional-572000 image build -t localhost/my-image:functional-572000 testdata\build --alsologtostderr: (11.0469872s)
functional_test.go:323: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-572000 image build -t localhost/my-image:functional-572000 testdata\build --alsologtostderr:
I1014 07:13:46.441613   12368 out.go:345] Setting OutFile to fd 1732 ...
I1014 07:13:46.461591   12368 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1014 07:13:46.461591   12368 out.go:358] Setting ErrFile to fd 1256...
I1014 07:13:46.461654   12368 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1014 07:13:46.481159   12368 config.go:182] Loaded profile config "functional-572000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1014 07:13:46.503330   12368 config.go:182] Loaded profile config "functional-572000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1014 07:13:46.504664   12368 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-572000 ).state
I1014 07:13:48.808782   12368 main.go:141] libmachine: [stdout =====>] : Running

I1014 07:13:48.808883   12368 main.go:141] libmachine: [stderr =====>] : 
I1014 07:13:48.821638   12368 ssh_runner.go:195] Run: systemctl --version
I1014 07:13:48.821638   12368 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-572000 ).state
I1014 07:13:51.087122   12368 main.go:141] libmachine: [stdout =====>] : Running

I1014 07:13:51.087122   12368 main.go:141] libmachine: [stderr =====>] : 
I1014 07:13:51.087260   12368 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-572000 ).networkadapters[0]).ipaddresses[0]
I1014 07:13:53.686886   12368 main.go:141] libmachine: [stdout =====>] : 172.20.99.72

I1014 07:13:53.686886   12368 main.go:141] libmachine: [stderr =====>] : 
I1014 07:13:53.687187   12368 sshutil.go:53] new ssh client: &{IP:172.20.99.72 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-572000\id_rsa Username:docker}
I1014 07:13:53.782161   12368 ssh_runner.go:235] Completed: systemctl --version: (4.960517s)
I1014 07:13:53.782302   12368 build_images.go:161] Building image from path: C:\Users\jenkins.minikube1\AppData\Local\Temp\build.3778214602.tar
I1014 07:13:53.794027   12368 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1014 07:13:53.829404   12368 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3778214602.tar
I1014 07:13:53.839930   12368 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3778214602.tar: stat -c "%s %y" /var/lib/minikube/build/build.3778214602.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3778214602.tar': No such file or directory
I1014 07:13:53.840056   12368 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\AppData\Local\Temp\build.3778214602.tar --> /var/lib/minikube/build/build.3778214602.tar (3072 bytes)
I1014 07:13:53.905961   12368 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3778214602
I1014 07:13:53.939885   12368 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3778214602 -xf /var/lib/minikube/build/build.3778214602.tar
I1014 07:13:53.959688   12368 docker.go:360] Building image: /var/lib/minikube/build/build.3778214602
I1014 07:13:53.970609   12368 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-572000 /var/lib/minikube/build/build.3778214602
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.8s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.1s

#4 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#4 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#4 ...

#5 [internal] load build context
#5 transferring context: 62B done
#5 DONE 0.1s

#4 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#4 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#4 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#4 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#4 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#4 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#4 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.4s done
#4 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.2s done
#4 DONE 0.9s

#6 [2/3] RUN true
#6 DONE 0.3s

#7 [3/3] ADD content.txt /
#7 DONE 0.2s

#8 exporting to image
#8 exporting layers 0.1s done
#8 writing image sha256:64a97bf05d7189c41d9e74114232b81cce14a7a05ecd48079466e681dcb3a88e done
#8 naming to localhost/my-image:functional-572000 0.0s done
#8 DONE 0.2s
I1014 07:13:57.254053   12368 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-572000 /var/lib/minikube/build/build.3778214602: (3.2833632s)
I1014 07:13:57.266472   12368 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3778214602
I1014 07:13:57.312416   12368 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3778214602.tar
I1014 07:13:57.335592   12368 build_images.go:217] Built localhost/my-image:functional-572000 from C:\Users\jenkins.minikube1\AppData\Local\Temp\build.3778214602.tar
I1014 07:13:57.335759   12368 build_images.go:133] succeeded building to: functional-572000
I1014 07:13:57.335759   12368 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-572000 image ls
functional_test.go:451: (dbg) Done: out/minikube-windows-amd64.exe -p functional-572000 image ls: (7.3571013s)
E1014 07:14:10.844440     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\addons-043000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1014 07:15:33.918831     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\addons-043000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (28.08s)

TestFunctional/parallel/ImageCommands/Setup (2.12s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.9880875s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-572000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.12s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (17.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-572000 image load --daemon kicbase/echo-server:functional-572000 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-windows-amd64.exe -p functional-572000 image load --daemon kicbase/echo-server:functional-572000 --alsologtostderr: (9.5543211s)
functional_test.go:451: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-572000 image ls
functional_test.go:451: (dbg) Done: out/minikube-windows-amd64.exe -p functional-572000 image ls: (7.768856s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (17.32s)

TestFunctional/parallel/Version/short (0.44s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-572000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.44s)

TestFunctional/parallel/Version/components (8.05s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-572000 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-windows-amd64.exe -p functional-572000 version -o=json --components: (8.0505815s)
--- PASS: TestFunctional/parallel/Version/components (8.05s)

TestFunctional/parallel/ServiceCmd/DeployApp (16.49s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-572000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-572000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-wkzq6" [4a3937ec-81ff-49c6-97ac-0c85de9ffec4] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-wkzq6" [4a3937ec-81ff-49c6-97ac-0c85de9ffec4] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 16.0089022s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (16.49s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (17.92s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-572000 image load --daemon kicbase/echo-server:functional-572000 --alsologtostderr
functional_test.go:365: (dbg) Done: out/minikube-windows-amd64.exe -p functional-572000 image load --daemon kicbase/echo-server:functional-572000 --alsologtostderr: (8.997084s)
functional_test.go:451: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-572000 image ls
functional_test.go:451: (dbg) Done: out/minikube-windows-amd64.exe -p functional-572000 image ls: (8.9214412s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (17.92s)

TestFunctional/parallel/ServiceCmd/List (14.19s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-572000 service list
functional_test.go:1459: (dbg) Done: out/minikube-windows-amd64.exe -p functional-572000 service list: (14.1916779s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (14.19s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (9.69s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-572000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-572000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-572000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 4852: OpenProcess: The parameter is incorrect.
helpers_test.go:508: unable to kill pid 796: TerminateProcess: Access is denied.
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-572000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (9.69s)

TestFunctional/parallel/ServiceCmd/JSONOutput (15.03s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-572000 service list -o json
functional_test.go:1489: (dbg) Done: out/minikube-windows-amd64.exe -p functional-572000 service list -o json: (15.0323375s)
functional_test.go:1494: Took "15.0325066s" to run "out/minikube-windows-amd64.exe -p functional-572000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (15.03s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (19.11s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-572000
functional_test.go:245: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-572000 image load --daemon kicbase/echo-server:functional-572000 --alsologtostderr
functional_test.go:245: (dbg) Done: out/minikube-windows-amd64.exe -p functional-572000 image load --daemon kicbase/echo-server:functional-572000 --alsologtostderr: (9.8742619s)
functional_test.go:451: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-572000 image ls
functional_test.go:451: (dbg) Done: out/minikube-windows-amd64.exe -p functional-572000 image ls: (8.159162s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (19.11s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-572000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (16.64s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-572000 apply -f testdata\testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [a83adf21-1003-438a-88e2-597daff1647b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [a83adf21-1003-438a-88e2-597daff1647b] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 16.0079998s
I1014 07:11:29.787818     936 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (16.64s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (8.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-572000 image save kicbase/echo-server:functional-572000 C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-windows-amd64.exe -p functional-572000 image save kicbase/echo-server:functional-572000 C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar --alsologtostderr: (8.0649063s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (8.07s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.21s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-572000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 9000: OpenProcess: The parameter is incorrect.
helpers_test.go:508: unable to kill pid 6264: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.21s)

TestFunctional/parallel/ImageCommands/ImageRemove (15.59s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-572000 image rm kicbase/echo-server:functional-572000 --alsologtostderr
functional_test.go:392: (dbg) Done: out/minikube-windows-amd64.exe -p functional-572000 image rm kicbase/echo-server:functional-572000 --alsologtostderr: (7.8268849s)
functional_test.go:451: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-572000 image ls
functional_test.go:451: (dbg) Done: out/minikube-windows-amd64.exe -p functional-572000 image ls: (7.7607673s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (15.59s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (15.88s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-572000 image load C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar --alsologtostderr
functional_test.go:409: (dbg) Done: out/minikube-windows-amd64.exe -p functional-572000 image load C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar --alsologtostderr: (8.1125579s)
functional_test.go:451: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-572000 image ls
functional_test.go:451: (dbg) Done: out/minikube-windows-amd64.exe -p functional-572000 image ls: (7.7667184s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (15.88s)

TestFunctional/parallel/ProfileCmd/profile_not_create (14.25s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-windows-amd64.exe profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
functional_test.go:1275: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (13.9377193s)
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (14.25s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (8.99s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-572000
functional_test.go:424: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-572000 image save --daemon kicbase/echo-server:functional-572000 --alsologtostderr
functional_test.go:424: (dbg) Done: out/minikube-windows-amd64.exe -p functional-572000 image save --daemon kicbase/echo-server:functional-572000 --alsologtostderr: (8.750174s)
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-572000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (8.99s)

TestFunctional/parallel/ProfileCmd/profile_list (14.18s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-windows-amd64.exe profile list
functional_test.go:1310: (dbg) Done: out/minikube-windows-amd64.exe profile list: (13.9845354s)
functional_test.go:1315: Took "13.9846998s" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
functional_test.go:1329: Took "197.3588ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (14.18s)

TestFunctional/parallel/ProfileCmd/profile_json_output (14.19s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json
functional_test.go:1361: (dbg) Done: out/minikube-windows-amd64.exe profile list -o json: (13.9780113s)
functional_test.go:1366: Took "13.9782569s" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1379: Took "207.0773ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (14.19s)

TestFunctional/parallel/DockerEnv/powershell (44.62s)

=== RUN   TestFunctional/parallel/DockerEnv/powershell
functional_test.go:499: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-572000 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-572000"
functional_test.go:499: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-572000 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-572000": (30.2901981s)
functional_test.go:522: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-572000 docker-env | Invoke-Expression ; docker images"
functional_test.go:522: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-572000 docker-env | Invoke-Expression ; docker images": (14.3154174s)
--- PASS: TestFunctional/parallel/DockerEnv/powershell (44.62s)

TestFunctional/parallel/UpdateContextCmd/no_changes (2.43s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-572000 update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Done: out/minikube-windows-amd64.exe -p functional-572000 update-context --alsologtostderr -v=2: (2.4302935s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (2.43s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.46s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-572000 update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Done: out/minikube-windows-amd64.exe -p functional-572000 update-context --alsologtostderr -v=2: (2.458422s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.46s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (2.54s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-572000 update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Done: out/minikube-windows-amd64.exe -p functional-572000 update-context --alsologtostderr -v=2: (2.5326067s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (2.54s)

TestFunctional/delete_echo-server_images (0.24s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-572000
--- PASS: TestFunctional/delete_echo-server_images (0.24s)

TestFunctional/delete_my-image_image (0.1s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-572000
--- PASS: TestFunctional/delete_my-image_image (0.10s)

TestFunctional/delete_minikube_cached_images (0.09s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-572000
--- PASS: TestFunctional/delete_minikube_cached_images (0.09s)

TestMultiControlPlane/serial/NodeLabels (0.18s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-132600 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.18s)

TestImageBuild/serial/Setup (193.12s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -p image-686600 --driver=hyperv
E1014 07:53:40.458599     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\functional-572000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1014 07:54:10.846806     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\addons-043000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1014 07:55:37.377646     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\functional-572000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
image_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -p image-686600 --driver=hyperv: (3m13.1232934s)
--- PASS: TestImageBuild/serial/Setup (193.12s)

TestImageBuild/serial/NormalBuild (10.25s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-686600
image_test.go:78: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-686600: (10.2462322s)
--- PASS: TestImageBuild/serial/NormalBuild (10.25s)

TestImageBuild/serial/BuildWithBuildArg (8.63s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-686600
image_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-686600: (8.6339707s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (8.63s)

TestImageBuild/serial/BuildWithDockerIgnore (8.02s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-686600
image_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-686600: (8.0207177s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (8.02s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (8.08s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-686600
image_test.go:88: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-686600: (8.078605s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (8.08s)

TestJSONOutput/start/Command (198.28s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-362300 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv
E1014 07:59:10.846909     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\addons-043000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1014 08:00:37.378677     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\functional-572000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p json-output-362300 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv: (3m18.2755315s)
--- PASS: TestJSONOutput/start/Command (198.28s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (7.73s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-362300 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe pause -p json-output-362300 --output=json --user=testUser: (7.7332831s)
--- PASS: TestJSONOutput/pause/Command (7.73s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (7.73s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-362300 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe unpause -p json-output-362300 --output=json --user=testUser: (7.7287665s)
--- PASS: TestJSONOutput/unpause/Command (7.73s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (39.21s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-362300 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p json-output-362300 --output=json --user=testUser: (39.2074695s)
--- PASS: TestJSONOutput/stop/Command (39.21s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.84s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-985500 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-985500 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (209.7744ms)
-- stdout --
	{"specversion":"1.0","id":"4d23ba7e-0f9b-4199-bb24-2a5df29d127a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-985500] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5011 Build 19045.5011","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"028a5d8c-aa69-459c-9faf-47c752476340","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube1\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"6d946030-332b-49bf-911b-e49a8a96d6ac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"693be51b-6b5e-48f3-a5ba-d76c13f3a925","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"2f3f7f1a-a11d-46a2-b77d-829e7b1cf70e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19790"}}
	{"specversion":"1.0","id":"da2ea272-f22c-47b4-a0b6-d12fb14154f5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"96afd2de-c9bc-4e49-826e-3cfe433dbaea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-985500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-985500
--- PASS: TestErrorJSONOutput (0.84s)
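Every line in the stdout block above is a CloudEvents envelope, and the final io.k8s.sigs.minikube.error event is what carries the exit code 56 and the message. A minimal Go sketch of picking that event out of the stream; the struct below mirrors only the fields visible in the logged events and is not a type exported by minikube:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// envelope mirrors fields visible in the logged events; it is an
	// illustrative shape, not minikube's own type.
	type envelope struct {
		Type string            `json:"type"`
		Data map[string]string `json:"data"`
	}

	func main() {
		// One line copied from the stdout block above, shortened to the
		// fields this sketch reads.
		line := `{"type":"io.k8s.sigs.minikube.error","data":{"exitcode":"56","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS"}}`
		var ev envelope
		if err := json.Unmarshal([]byte(line), &ev); err != nil {
			panic(err)
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("exit %s: %s\n", ev.Data["exitcode"], ev.Data["message"])
		}
	}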

TestMainNoArgs (0.19s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.19s)

TestMinikubeProfile (517.8s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p first-497100 --driver=hyperv
E1014 08:04:10.847449     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\addons-043000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p first-497100 --driver=hyperv: (3m12.0993374s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p second-497100 --driver=hyperv
E1014 08:05:33.931323     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\addons-043000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1014 08:05:37.378826     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\functional-572000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p second-497100 --driver=hyperv: (3m14.1029256s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile first-497100
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (23.3022351s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile second-497100
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (23.3242042s)
helpers_test.go:175: Cleaning up "second-497100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p second-497100
E1014 08:09:10.848164     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\addons-043000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p second-497100: (39.6404063s)
helpers_test.go:175: Cleaning up "first-497100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p first-497100
E1014 08:10:20.462451     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\functional-572000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p first-497100: (44.8011883s)
--- PASS: TestMinikubeProfile (517.80s)
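The profile test flips the active profile with minikube profile and confirms the switch by reading profile list -ojson. A sketch of extracting profile names from that JSON, assuming a top-level object with valid/invalid profile arrays whose entries carry a Name field; treat that shape as an assumption to check against the minikube version in use:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("out/minikube-windows-amd64.exe",
			"profile", "list", "-ojson").Output()
		if err != nil {
			panic(err)
		}
		// Assumed shape: {"invalid":[...],"valid":[{"Name":"first-497100",...},...]}
		var list struct {
			Valid []struct {
				Name string `json:"Name"`
			} `json:"valid"`
		}
		if err := json.Unmarshal(out, &list); err != nil {
			panic(err)
		}
		for _, p := range list.Valid {
			fmt.Println(p.Name)
		}
	}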

TestMountStart/serial/StartWithMountFirst (151.54s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-953800 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv
E1014 08:10:37.379038     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\functional-572000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-1-953800 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m30.5405115s)
--- PASS: TestMountStart/serial/StartWithMountFirst (151.54s)
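StartWithMountFirst boots the VM with a host directory pre-mounted (the VerifyMount* subtests below read it back with ssh -- ls /minikube-host), using minikube's 9p-based mount: uid/gid 0 own the mount in the guest, --mount-msize sizes the 9p payload, and each concurrent profile needs its own --mount-port (46464 here, 46465 for the second profile). A sketch pairing the start and the verification, with flag values copied from the log:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func run(args ...string) {
		out, err := exec.Command("out/minikube-windows-amd64.exe", args...).CombinedOutput()
		fmt.Printf("%s", out)
		if err != nil {
			panic(err)
		}
	}

	func main() {
		// Flag values copied from the StartWithMountFirst command above.
		run("start", "-p", "mount-start-1-953800", "--memory=2048",
			"--mount", "--mount-gid", "0", "--mount-msize", "6543",
			"--mount-port", "46464", "--mount-uid", "0",
			"--no-kubernetes", "--driver=hyperv")
		// Mirror of the VerifyMountFirst check: list the mounted host dir.
		run("-p", "mount-start-1-953800", "ssh", "--", "ls", "/minikube-host")
	}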

TestMountStart/serial/VerifyMountFirst (9.36s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-1-953800 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-1-953800 ssh -- ls /minikube-host: (9.363154s)
--- PASS: TestMountStart/serial/VerifyMountFirst (9.36s)

TestMountStart/serial/StartWithMountSecond (152.6s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-953800 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv
E1014 08:14:10.848573     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\addons-043000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1014 08:15:37.379587     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\functional-572000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-953800 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m31.5939121s)
--- PASS: TestMountStart/serial/StartWithMountSecond (152.60s)

TestMountStart/serial/VerifyMountSecond (9.29s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-953800 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-953800 ssh -- ls /minikube-host: (9.291961s)
--- PASS: TestMountStart/serial/VerifyMountSecond (9.29s)

TestMountStart/serial/DeleteFirst (30.08s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p mount-start-1-953800 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p mount-start-1-953800 --alsologtostderr -v=5: (30.0827189s)
--- PASS: TestMountStart/serial/DeleteFirst (30.08s)

TestMountStart/serial/VerifyMountPostDelete (9.15s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-953800 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-953800 ssh -- ls /minikube-host: (9.1457441s)
--- PASS: TestMountStart/serial/VerifyMountPostDelete (9.15s)

TestMountStart/serial/Stop (29.07s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe stop -p mount-start-2-953800
mount_start_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe stop -p mount-start-2-953800: (29.0732852s)
--- PASS: TestMountStart/serial/Stop (29.07s)

TestMountStart/serial/RestartStopped (114.77s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-953800
mount_start_test.go:166: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-953800: (1m53.7635487s)
--- PASS: TestMountStart/serial/RestartStopped (114.77s)

TestMountStart/serial/VerifyMountPostStop (9.06s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-953800 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-953800 ssh -- ls /minikube-host: (9.0590829s)
--- PASS: TestMountStart/serial/VerifyMountPostStop (9.06s)

TestMultiNode/serial/FreshStart2Nodes (424.26s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-671000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv
E1014 08:20:37.380239     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\functional-572000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1014 08:22:13.935385     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\addons-043000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1014 08:24:10.849485     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\addons-043000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1014 08:25:37.379830     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\functional-572000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-671000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv: (6m41.1362097s)
multinode_test.go:102: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-671000 status --alsologtostderr
multinode_test.go:102: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-671000 status --alsologtostderr: (23.1229487s)
--- PASS: TestMultiNode/serial/FreshStart2Nodes (424.26s)

TestMultiNode/serial/DeployApp2Nodes (9.03s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-671000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-671000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-671000 -- rollout status deployment/busybox: (3.5474016s)
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-671000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-671000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-671000 -- exec busybox-7dff88458-bnqj6 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-671000 -- exec busybox-7dff88458-bnqj6 -- nslookup kubernetes.io: (1.8958339s)
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-671000 -- exec busybox-7dff88458-vlp7j -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-671000 -- exec busybox-7dff88458-bnqj6 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-671000 -- exec busybox-7dff88458-vlp7j -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-671000 -- exec busybox-7dff88458-bnqj6 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-671000 -- exec busybox-7dff88458-vlp7j -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (9.03s)

TestMultiNode/serial/AddNode (232.64s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-671000 -v 3 --alsologtostderr
E1014 08:29:10.850007     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\addons-043000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1014 08:30:37.381831     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\functional-572000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe node add -p multinode-671000 -v 3 --alsologtostderr: (3m18.0439207s)
multinode_test.go:127: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-671000 status --alsologtostderr
multinode_test.go:127: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-671000 status --alsologtostderr: (34.5986004s)
--- PASS: TestMultiNode/serial/AddNode (232.64s)

TestMultiNode/serial/MultiNodeLabels (0.18s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-671000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.18s)

TestMultiNode/serial/ProfileList (34.73s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
multinode_test.go:143: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (34.7327445s)
--- PASS: TestMultiNode/serial/ProfileList (34.73s)

TestMultiNode/serial/CopyFile (349.43s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-671000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-671000 status --output json --alsologtostderr: (34.496273s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-671000 cp testdata\cp-test.txt multinode-671000:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-671000 cp testdata\cp-test.txt multinode-671000:/home/docker/cp-test.txt: (9.0481042s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-671000 ssh -n multinode-671000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-671000 ssh -n multinode-671000 "sudo cat /home/docker/cp-test.txt": (8.9786538s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-671000 cp multinode-671000:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile867123417\001\cp-test_multinode-671000.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-671000 cp multinode-671000:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile867123417\001\cp-test_multinode-671000.txt: (9.2537028s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-671000 ssh -n multinode-671000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-671000 ssh -n multinode-671000 "sudo cat /home/docker/cp-test.txt": (9.1935032s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-671000 cp multinode-671000:/home/docker/cp-test.txt multinode-671000-m02:/home/docker/cp-test_multinode-671000_multinode-671000-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-671000 cp multinode-671000:/home/docker/cp-test.txt multinode-671000-m02:/home/docker/cp-test_multinode-671000_multinode-671000-m02.txt: (16.177729s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-671000 ssh -n multinode-671000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-671000 ssh -n multinode-671000 "sudo cat /home/docker/cp-test.txt": (9.2490897s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-671000 ssh -n multinode-671000-m02 "sudo cat /home/docker/cp-test_multinode-671000_multinode-671000-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-671000 ssh -n multinode-671000-m02 "sudo cat /home/docker/cp-test_multinode-671000_multinode-671000-m02.txt": (9.2803192s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-671000 cp multinode-671000:/home/docker/cp-test.txt multinode-671000-m03:/home/docker/cp-test_multinode-671000_multinode-671000-m03.txt
E1014 08:34:10.850419     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\addons-043000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-671000 cp multinode-671000:/home/docker/cp-test.txt multinode-671000-m03:/home/docker/cp-test_multinode-671000_multinode-671000-m03.txt: (15.987495s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-671000 ssh -n multinode-671000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-671000 ssh -n multinode-671000 "sudo cat /home/docker/cp-test.txt": (9.0646698s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-671000 ssh -n multinode-671000-m03 "sudo cat /home/docker/cp-test_multinode-671000_multinode-671000-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-671000 ssh -n multinode-671000-m03 "sudo cat /home/docker/cp-test_multinode-671000_multinode-671000-m03.txt": (9.0812013s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-671000 cp testdata\cp-test.txt multinode-671000-m02:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-671000 cp testdata\cp-test.txt multinode-671000-m02:/home/docker/cp-test.txt: (9.2165171s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-671000 ssh -n multinode-671000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-671000 ssh -n multinode-671000-m02 "sudo cat /home/docker/cp-test.txt": (9.1233207s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-671000 cp multinode-671000-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile867123417\001\cp-test_multinode-671000-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-671000 cp multinode-671000-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile867123417\001\cp-test_multinode-671000-m02.txt: (9.1592775s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-671000 ssh -n multinode-671000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-671000 ssh -n multinode-671000-m02 "sudo cat /home/docker/cp-test.txt": (9.1111811s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-671000 cp multinode-671000-m02:/home/docker/cp-test.txt multinode-671000:/home/docker/cp-test_multinode-671000-m02_multinode-671000.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-671000 cp multinode-671000-m02:/home/docker/cp-test.txt multinode-671000:/home/docker/cp-test_multinode-671000-m02_multinode-671000.txt: (15.9229522s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-671000 ssh -n multinode-671000-m02 "sudo cat /home/docker/cp-test.txt"
E1014 08:35:37.381306     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\functional-572000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-671000 ssh -n multinode-671000-m02 "sudo cat /home/docker/cp-test.txt": (9.1145523s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-671000 ssh -n multinode-671000 "sudo cat /home/docker/cp-test_multinode-671000-m02_multinode-671000.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-671000 ssh -n multinode-671000 "sudo cat /home/docker/cp-test_multinode-671000-m02_multinode-671000.txt": (9.0948824s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-671000 cp multinode-671000-m02:/home/docker/cp-test.txt multinode-671000-m03:/home/docker/cp-test_multinode-671000-m02_multinode-671000-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-671000 cp multinode-671000-m02:/home/docker/cp-test.txt multinode-671000-m03:/home/docker/cp-test_multinode-671000-m02_multinode-671000-m03.txt: (15.9773089s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-671000 ssh -n multinode-671000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-671000 ssh -n multinode-671000-m02 "sudo cat /home/docker/cp-test.txt": (9.144199s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-671000 ssh -n multinode-671000-m03 "sudo cat /home/docker/cp-test_multinode-671000-m02_multinode-671000-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-671000 ssh -n multinode-671000-m03 "sudo cat /home/docker/cp-test_multinode-671000-m02_multinode-671000-m03.txt": (9.1224615s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-671000 cp testdata\cp-test.txt multinode-671000-m03:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-671000 cp testdata\cp-test.txt multinode-671000-m03:/home/docker/cp-test.txt: (9.0448037s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-671000 ssh -n multinode-671000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-671000 ssh -n multinode-671000-m03 "sudo cat /home/docker/cp-test.txt": (9.1282636s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-671000 cp multinode-671000-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile867123417\001\cp-test_multinode-671000-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-671000 cp multinode-671000-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile867123417\001\cp-test_multinode-671000-m03.txt: (9.1135499s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-671000 ssh -n multinode-671000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-671000 ssh -n multinode-671000-m03 "sudo cat /home/docker/cp-test.txt": (9.0826596s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-671000 cp multinode-671000-m03:/home/docker/cp-test.txt multinode-671000:/home/docker/cp-test_multinode-671000-m03_multinode-671000.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-671000 cp multinode-671000-m03:/home/docker/cp-test.txt multinode-671000:/home/docker/cp-test_multinode-671000-m03_multinode-671000.txt: (15.9284735s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-671000 ssh -n multinode-671000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-671000 ssh -n multinode-671000-m03 "sudo cat /home/docker/cp-test.txt": (9.2041016s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-671000 ssh -n multinode-671000 "sudo cat /home/docker/cp-test_multinode-671000-m03_multinode-671000.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-671000 ssh -n multinode-671000 "sudo cat /home/docker/cp-test_multinode-671000-m03_multinode-671000.txt": (9.0420907s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-671000 cp multinode-671000-m03:/home/docker/cp-test.txt multinode-671000-m02:/home/docker/cp-test_multinode-671000-m03_multinode-671000-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-671000 cp multinode-671000-m03:/home/docker/cp-test.txt multinode-671000-m02:/home/docker/cp-test_multinode-671000-m03_multinode-671000-m02.txt: (15.932065s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-671000 ssh -n multinode-671000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-671000 ssh -n multinode-671000-m03 "sudo cat /home/docker/cp-test.txt": (9.1072407s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-671000 ssh -n multinode-671000-m02 "sudo cat /home/docker/cp-test_multinode-671000-m03_multinode-671000-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-671000 ssh -n multinode-671000-m02 "sudo cat /home/docker/cp-test_multinode-671000-m03_multinode-671000-m02.txt": (9.0294905s)
--- PASS: TestMultiNode/serial/CopyFile (349.43s)
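CopyFile round-trips testdata\cp-test.txt through every node pairing with minikube cp, whose <node>:<path> form is visible in the commands above, and verifies each copy with ssh -n <node> sudo cat. A condensed Go sketch of one host-to-node-to-node hop; the target filename cp-test_roundtrip.txt is hypothetical, everything else is taken from the log:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// mk runs one minikube command against the multinode profile from the
	// log and returns its combined output.
	func mk(args ...string) string {
		args = append([]string{"-p", "multinode-671000"}, args...)
		out, err := exec.Command("out/minikube-windows-amd64.exe", args...).CombinedOutput()
		if err != nil {
			panic(fmt.Sprintf("%v: %s", err, out))
		}
		return string(out)
	}

	func main() {
		// Host -> node, node -> node, then read the copy back on the target.
		mk("cp", `testdata\cp-test.txt`, "multinode-671000:/home/docker/cp-test.txt")
		mk("cp", "multinode-671000:/home/docker/cp-test.txt",
			"multinode-671000-m02:/home/docker/cp-test_roundtrip.txt") // hypothetical name
		fmt.Print(mk("ssh", "-n", "multinode-671000-m02",
			"sudo cat /home/docker/cp-test_roundtrip.txt"))
	}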

TestMultiNode/serial/StopNode (74.76s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-671000 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-671000 node stop m03: (24.5800815s)
multinode_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-671000 status
E1014 08:38:53.939652     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\addons-043000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-671000 status: exit status 7 (25.0016283s)
-- stdout --
	multinode-671000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-671000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-671000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-671000 status --alsologtostderr
E1014 08:39:10.850437     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\addons-043000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-671000 status --alsologtostderr: exit status 7 (25.1789033s)
-- stdout --
	multinode-671000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-671000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-671000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1014 08:38:55.224711    5036 out.go:345] Setting OutFile to fd 1204 ...
	I1014 08:38:55.226342    5036 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 08:38:55.226342    5036 out.go:358] Setting ErrFile to fd 1080...
	I1014 08:38:55.226342    5036 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 08:38:55.243872    5036 out.go:352] Setting JSON to false
	I1014 08:38:55.243950    5036 mustload.go:65] Loading cluster: multinode-671000
	I1014 08:38:55.244042    5036 notify.go:220] Checking for updates...
	I1014 08:38:55.244389    5036 config.go:182] Loaded profile config "multinode-671000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 08:38:55.244389    5036 status.go:174] checking status of multinode-671000 ...
	I1014 08:38:55.246087    5036 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000 ).state
	I1014 08:38:57.364438    5036 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:38:57.364534    5036 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:38:57.364534    5036 status.go:371] multinode-671000 host status = "Running" (err=<nil>)
	I1014 08:38:57.364534    5036 host.go:66] Checking if "multinode-671000" exists ...
	I1014 08:38:57.365425    5036 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000 ).state
	I1014 08:38:59.462330    5036 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:38:59.463063    5036 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:38:59.463063    5036 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000 ).networkadapters[0]).ipaddresses[0]
	I1014 08:39:01.953967    5036 main.go:141] libmachine: [stdout =====>] : 172.20.100.167
	
	I1014 08:39:01.955077    5036 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:39:01.955077    5036 host.go:66] Checking if "multinode-671000" exists ...
	I1014 08:39:01.968908    5036 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 08:39:01.968908    5036 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000 ).state
	I1014 08:39:04.009047    5036 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:39:04.009047    5036 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:39:04.009269    5036 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000 ).networkadapters[0]).ipaddresses[0]
	I1014 08:39:06.482877    5036 main.go:141] libmachine: [stdout =====>] : 172.20.100.167
	
	I1014 08:39:06.483211    5036 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:39:06.483211    5036 sshutil.go:53] new ssh client: &{IP:172.20.100.167 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-671000\id_rsa Username:docker}
	I1014 08:39:06.587014    5036 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.6179928s)
	I1014 08:39:06.598969    5036 ssh_runner.go:195] Run: systemctl --version
	I1014 08:39:06.617490    5036 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 08:39:06.646115    5036 kubeconfig.go:125] found "multinode-671000" server: "https://172.20.100.167:8443"
	I1014 08:39:06.646115    5036 api_server.go:166] Checking apiserver status ...
	I1014 08:39:06.659878    5036 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 08:39:06.700078    5036 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2051/cgroup
	W1014 08:39:06.717373    5036 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2051/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1014 08:39:06.728826    5036 ssh_runner.go:195] Run: ls
	I1014 08:39:06.735104    5036 api_server.go:253] Checking apiserver healthz at https://172.20.100.167:8443/healthz ...
	I1014 08:39:06.743995    5036 api_server.go:279] https://172.20.100.167:8443/healthz returned 200:
	ok
	I1014 08:39:06.743995    5036 status.go:463] multinode-671000 apiserver status = Running (err=<nil>)
	I1014 08:39:06.743995    5036 status.go:176] multinode-671000 status: &{Name:multinode-671000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1014 08:39:06.744124    5036 status.go:174] checking status of multinode-671000-m02 ...
	I1014 08:39:06.744993    5036 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000-m02 ).state
	I1014 08:39:08.882828    5036 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:39:08.883501    5036 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:39:08.883559    5036 status.go:371] multinode-671000-m02 host status = "Running" (err=<nil>)
	I1014 08:39:08.883559    5036 host.go:66] Checking if "multinode-671000-m02" exists ...
	I1014 08:39:08.884284    5036 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000-m02 ).state
	I1014 08:39:10.992080    5036 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:39:10.992080    5036 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:39:10.992080    5036 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 08:39:13.446194    5036 main.go:141] libmachine: [stdout =====>] : 172.20.109.137
	
	I1014 08:39:13.446515    5036 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:39:13.446607    5036 host.go:66] Checking if "multinode-671000-m02" exists ...
	I1014 08:39:13.457887    5036 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 08:39:13.457887    5036 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000-m02 ).state
	I1014 08:39:15.556830    5036 main.go:141] libmachine: [stdout =====>] : Running
	
	I1014 08:39:15.557192    5036 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:39:15.557282    5036 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-671000-m02 ).networkadapters[0]).ipaddresses[0]
	I1014 08:39:18.064412    5036 main.go:141] libmachine: [stdout =====>] : 172.20.109.137
	
	I1014 08:39:18.064806    5036 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:39:18.065064    5036 sshutil.go:53] new ssh client: &{IP:172.20.109.137 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-671000-m02\id_rsa Username:docker}
	I1014 08:39:18.172269    5036 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.7143744s)
	I1014 08:39:18.183405    5036 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 08:39:18.210007    5036 status.go:176] multinode-671000-m02 status: &{Name:multinode-671000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1014 08:39:18.210007    5036 status.go:174] checking status of multinode-671000-m03 ...
	I1014 08:39:18.210635    5036 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-671000-m03 ).state
	I1014 08:39:20.251406    5036 main.go:141] libmachine: [stdout =====>] : Off
	
	I1014 08:39:20.252008    5036 main.go:141] libmachine: [stderr =====>] : 
	I1014 08:39:20.252008    5036 status.go:371] multinode-671000-m03 host status = "Stopped" (err=<nil>)
	I1014 08:39:20.252008    5036 status.go:384] host is not running, skipping remaining checks
	I1014 08:39:20.252008    5036 status.go:176] multinode-671000-m03 status: &{Name:multinode-671000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (74.76s)
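Note the status calls above still print per-node state but exit with status 7 while m03 is down; the test treats that as the expected degraded-cluster signal rather than a failure. A sketch of making the same distinction in Go; reading exit code 7 as "a node is stopped" is taken from this run, not from an exhaustive exit-code table:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-windows-amd64.exe", "-p", "multinode-671000", "status")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s", out)
		var ee *exec.ExitError
		if errors.As(err, &ee) && ee.ExitCode() == 7 {
			// Matches the log: status printed per-node state but exited 7
			// because multinode-671000-m03 is stopped.
			fmt.Println("cluster degraded: at least one node is stopped")
		} else if err != nil {
			panic(err)
		}
	}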

TestMultiNode/serial/StartAfterStop (190.03s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-671000 node start m03 -v=7 --alsologtostderr
E1014 08:40:37.381540     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\functional-572000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:282: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-671000 node start m03 -v=7 --alsologtostderr: (2m35.4055743s)
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-671000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-671000 status -v=7 --alsologtostderr: (34.4422259s)
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (190.03s)

TestPreload (495.57s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-998000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4
E1014 08:54:10.852168     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\addons-043000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1014 08:55:33.943858     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\addons-043000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1014 08:55:37.383437     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\functional-572000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-998000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4: (3m49.9267294s)
preload_test.go:52: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-998000 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-998000 image pull gcr.io/k8s-minikube/busybox: (8.3148956s)
preload_test.go:58: (dbg) Run:  out/minikube-windows-amd64.exe stop -p test-preload-998000
preload_test.go:58: (dbg) Done: out/minikube-windows-amd64.exe stop -p test-preload-998000: (38.7692453s)
preload_test.go:66: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-998000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv
E1014 08:59:10.853587     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\addons-043000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-998000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv: (2m49.8583649s)
preload_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-998000 image list
preload_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-998000 image list: (7.1341196s)
helpers_test.go:175: Cleaning up "test-preload-998000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-998000
E1014 09:00:20.477640     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\functional-572000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E1014 09:00:37.383491     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\functional-572000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-998000: (41.5619384s)
--- PASS: TestPreload (495.57s)

TestScheduledStopWindows (322.57s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-706600 --memory=2048 --driver=hyperv
scheduled_stop_test.go:128: (dbg) Done: out/minikube-windows-amd64.exe start -p scheduled-stop-706600 --memory=2048 --driver=hyperv: (3m12.1839578s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-706600 --schedule 5m
E1014 09:04:10.853507     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\addons-043000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-706600 --schedule 5m: (10.3157051s)
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-706600 -n scheduled-stop-706600
scheduled_stop_test.go:191: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-706600 -n scheduled-stop-706600: exit status 1 (10.013181s)
scheduled_stop_test.go:191: status error: exit status 1 (may be ok)
scheduled_stop_test.go:54: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p scheduled-stop-706600 -- sudo systemctl show minikube-scheduled-stop --no-page
scheduled_stop_test.go:54: (dbg) Done: out/minikube-windows-amd64.exe ssh -p scheduled-stop-706600 -- sudo systemctl show minikube-scheduled-stop --no-page: (9.2637574s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-706600 --schedule 5s
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-706600 --schedule 5s: (10.3015783s)
E1014 09:05:37.385233     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\functional-572000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe status -p scheduled-stop-706600
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p scheduled-stop-706600: exit status 7 (2.2510086s)
-- stdout --
	scheduled-stop-706600
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-706600 -n scheduled-stop-706600
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-706600 -n scheduled-stop-706600: exit status 7 (2.2447197s)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-706600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-706600
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-706600: (25.9932063s)
--- PASS: TestScheduledStopWindows (322.57s)
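Note: the scheduled-stop sequence above maps directly onto plain CLI usage. A sketch; the --cancel-scheduled flag is part of the minikube CLI but is not exercised in this log:

    minikube stop -p scheduled-stop-706600 --schedule 5m                  # arm a stop five minutes out
    minikube status -p scheduled-stop-706600 --format={{.TimeToStop}}     # time remaining on the timer
    minikube stop -p scheduled-stop-706600 --schedule 5s                  # re-arm with a shorter timer
    minikube stop -p scheduled-stop-706600 --cancel-scheduled             # cancel a pending stop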

TestRunningBinaryUpgrade (1042.24s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube-v1.26.0.3926989761.exe start -p running-upgrade-827800 --memory=2200 --vm-driver=hyperv
E1014 09:09:10.855038     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\addons-043000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube-v1.26.0.3926989761.exe start -p running-upgrade-827800 --memory=2200 --vm-driver=hyperv: (8m14.7127193s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-windows-amd64.exe start -p running-upgrade-827800 --memory=2200 --alsologtostderr -v=1 --driver=hyperv
E1014 09:15:37.384916     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\functional-572000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Done: out/minikube-windows-amd64.exe start -p running-upgrade-827800 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: (7m59.5552083s)
helpers_test.go:175: Cleaning up "running-upgrade-827800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p running-upgrade-827800
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p running-upgrade-827800: (1m6.3125347s)
--- PASS: TestRunningBinaryUpgrade (1042.24s)
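Note: the upgrade path validated here is "old binary creates the cluster, new binary adopts it while it is still running". In outline (the versioned exe stands for the temp-dir copy of the v1.26.0 release the test downloads):

    # Old release brings the profile up (note the older --vm-driver spelling)...
    minikube-v1.26.0.exe start -p running-upgrade-827800 --memory=2200 --vm-driver=hyperv
    # ...then the binary under test takes it over in place, with no stop in between.
    minikube start -p running-upgrade-827800 --memory=2200 --driver=hyperv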

TestNoKubernetes/serial/StartNoK8sWithVersion (0.31s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-204300 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-204300 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv: exit status 14 (311.9913ms)
-- stdout --
	* [NoKubernetes-204300] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5011 Build 19045.5011
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19790
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.31s)
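Note: this is a negative test, so the non-zero exit is the point: exit status 14 is minikube's MK_USAGE error, raised because --no-kubernetes and --kubernetes-version are mutually exclusive. Following the hint in the stderr above, the working sequence would be:

    minikube config unset kubernetes-version              # clear any globally pinned version first
    minikube start -p NoKubernetes-204300 --no-kubernetes --driver=hyperv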

TestStoppedBinaryUpgrade/Setup (1.03s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.03s)

TestStoppedBinaryUpgrade/Upgrade (774.42s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube-v1.26.0.3730680273.exe start -p stopped-upgrade-272200 --memory=2200 --vm-driver=hyperv
version_upgrade_test.go:183: (dbg) Done: C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube-v1.26.0.3730680273.exe start -p stopped-upgrade-272200 --memory=2200 --vm-driver=hyperv: (6m1.9054027s)
version_upgrade_test.go:192: (dbg) Run:  C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube-v1.26.0.3730680273.exe -p stopped-upgrade-272200 stop
E1014 09:17:00.481457     936 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\functional-572000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
version_upgrade_test.go:192: (dbg) Done: C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube-v1.26.0.3730680273.exe -p stopped-upgrade-272200 stop: (35.4938043s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-windows-amd64.exe start -p stopped-upgrade-272200 --memory=2200 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:198: (dbg) Done: out/minikube-windows-amd64.exe start -p stopped-upgrade-272200 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: (6m17.0222122s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (774.42s)
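Note: same idea as TestRunningBinaryUpgrade, but through a stopped cluster: the old release starts and then stops the profile, and the binary under test restarts it. In outline:

    minikube-v1.26.0.exe start -p stopped-upgrade-272200 --memory=2200 --vm-driver=hyperv
    minikube-v1.26.0.exe -p stopped-upgrade-272200 stop
    minikube start -p stopped-upgrade-272200 --memory=2200 --driver=hyperv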

TestStoppedBinaryUpgrade/MinikubeLogs (9.68s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-windows-amd64.exe logs -p stopped-upgrade-272200
version_upgrade_test.go:206: (dbg) Done: out/minikube-windows-amd64.exe logs -p stopped-upgrade-272200: (9.6755981s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (9.68s)

Test skip (31/203)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:968: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false windows amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DashboardCmd (300.01s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-572000 --alsologtostderr -v=1]
functional_test.go:916: output didn't produce a URL
functional_test.go:910: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-572000 --alsologtostderr -v=1] ...
helpers_test.go:502: unable to terminate pid 8372: Access is denied.
--- SKIP: TestFunctional/parallel/DashboardCmd (300.01s)
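Note: the skip here records an environment problem rather than a dashboard bug: on this runner the command below never printed a URL within the 300s window, and cleanup could not terminate the child process (pid 8372, "Access is denied"). The equivalent manual invocation, with an arbitrary port:

    minikube dashboard --url --port 36195 -p functional-572000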

TestFunctional/parallel/DryRun (5.03s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-572000 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:974: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-572000 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.0331017s)
-- stdout --
	* [functional-572000] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5011 Build 19045.5011
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19790
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
-- /stdout --
** stderr ** 
	I1014 07:12:17.046982    2680 out.go:345] Setting OutFile to fd 1276 ...
	I1014 07:12:17.048541    2680 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:12:17.048794    2680 out.go:358] Setting ErrFile to fd 1696...
	I1014 07:12:17.048874    2680 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:12:17.083875    2680 out.go:352] Setting JSON to false
	I1014 07:12:17.090349    2680 start.go:129] hostinfo: {"hostname":"minikube1","uptime":100651,"bootTime":1728814485,"procs":212,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5011 Build 19045.5011","kernelVersion":"10.0.19045.5011 Build 19045.5011","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W1014 07:12:17.091315    2680 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1014 07:12:17.100242    2680 out.go:177] * [functional-572000] minikube v1.34.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5011 Build 19045.5011
	I1014 07:12:17.103316    2680 notify.go:220] Checking for updates...
	I1014 07:12:17.108499    2680 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1014 07:12:17.111546    2680 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 07:12:17.113935    2680 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I1014 07:12:17.116566    2680 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 07:12:17.118552    2680 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 07:12:17.122563    2680 config.go:182] Loaded profile config "functional-572000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:12:17.123567    2680 driver.go:394] Setting default libvirt URI to qemu:///system
** /stderr **
functional_test.go:980: skipping this error on HyperV till this issue is solved https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/DryRun (5.03s)

TestFunctional/parallel/InternationalLanguage (5.03s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-572000 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-572000 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.0342485s)
-- stdout --
	* [functional-572000] minikube v1.34.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.5011 Build 19045.5011
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19790
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
-- /stdout --
** stderr ** 
	I1014 07:12:22.062811    6372 out.go:345] Setting OutFile to fd 1628 ...
	I1014 07:12:22.063803    6372 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:12:22.063803    6372 out.go:358] Setting ErrFile to fd 1080...
	I1014 07:12:22.063803    6372 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 07:12:22.093805    6372 out.go:352] Setting JSON to false
	I1014 07:12:22.097814    6372 start.go:129] hostinfo: {"hostname":"minikube1","uptime":100656,"bootTime":1728814485,"procs":211,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5011 Build 19045.5011","kernelVersion":"10.0.19045.5011 Build 19045.5011","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W1014 07:12:22.098787    6372 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1014 07:12:22.105140    6372 out.go:177] * [functional-572000] minikube v1.34.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.5011 Build 19045.5011
	I1014 07:12:22.108123    6372 notify.go:220] Checking for updates...
	I1014 07:12:22.111111    6372 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I1014 07:12:22.113122    6372 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 07:12:22.116121    6372 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I1014 07:12:22.119125    6372 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 07:12:22.122113    6372 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 07:12:22.125113    6372 config.go:182] Loaded profile config "functional-572000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1014 07:12:22.127128    6372 driver.go:394] Setting default libvirt URI to qemu:///system
** /stderr **
functional_test.go:1025: skipping this error on HyperV till this issue is solved https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/InternationalLanguage (5.03s)

TestFunctional/parallel/MountCmd (0s)

=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd
=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:57: skipping: mount broken on hyperv: https://github.com/kubernetes/minikube/issues/5029
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)
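Note: minikube mount shares a host directory into the VM; per issue 5029 it is broken on hyperv, so the test skips unconditionally on this driver. On a working driver the shape is (paths are illustrative):

    minikube mount C:\some\host\dir:/mnt/host -p functional-572000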

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:230: The test WaitService/IngressIP is broken on hyperv https://github.com/kubernetes/minikube/issues/8381
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.00s)
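Note: this and the TunnelCmd subtests that follow all sit behind minikube tunnel, which runs as a privileged process and routes host traffic to LoadBalancer services in the cluster; the serial checks that depend on host-side routing or DNS are the ones skipped on hyperv and windows. The underlying command:

    minikube tunnel -p functional-572000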

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:258: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopUnix (0s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:76: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:39: skipping due to https://github.com/kubernetes/minikube/issues/14232
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)